Jan 21 10:56:51 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 21 10:56:51 crc restorecon[4677]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:56:51 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 10:56:52 crc restorecon[4677]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 
10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:56:52 crc 
restorecon[4677]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 
10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 
10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc 
restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:56:52 crc restorecon[4677]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 10:56:52 crc restorecon[4677]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 10:56:52 crc restorecon[4677]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 21 10:56:52 crc kubenswrapper[4881]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 10:56:52 crc kubenswrapper[4881]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 21 10:56:52 crc kubenswrapper[4881]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 10:56:52 crc kubenswrapper[4881]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
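Before the kubelet's own startup warnings continue below, a note on the restorecon lines that make up the bulk of this capture: each one records a path under /var/lib/kubelet that restorecon skipped ("not reset as customized by admin"), together with the SELinux context the file currently carries, which restorecon treats as an admin customization and leaves in place. That context string is four colon-separated fields -- user:role:type:level -- and the level's MCS category pair (s0:c7,c13 and the like) is what keeps one pod's files unreadable from another pod's containers. Below is a minimal sketch of decoding a saved dump of this journal; the journal.txt filename and the per-pod grouping are illustrative assumptions, not anything the log itself prescribes:

```python
import re
from collections import defaultdict

# Matches the repeated restorecon message shape seen above:
#   <path> not reset as customized by admin to <user>:<role>:<type>:<level>
ENTRY = re.compile(
    r"(?P<path>/var/lib/kubelet/\S+) not reset as customized by admin to "
    r"(?P<user>\w+):(?P<role>\w+):(?P<type>\w+):(?P<level>s0(?::c\d+,c\d+)?)"
)
# Pod UIDs appear both dashed (mutable pods) and undashed (static pods).
POD_UID = re.compile(r"/var/lib/kubelet/pods/(?P<uid>[0-9a-f-]+)/")

def contexts_by_pod(journal_text: str) -> dict:
    """Group the MCS level of each skipped path by pod UID."""
    pods = defaultdict(set)
    for m in ENTRY.finditer(journal_text):
        uid = POD_UID.search(m.group("path"))
        if uid:
            pods[uid.group("uid")].add(m.group("level"))
    return pods

if __name__ == "__main__":
    with open("journal.txt") as f:  # assumed dump of the log above
        for uid, levels in sorted(contexts_by_pod(f.read()).items()):
            print(uid, sorted(levels))
```

Run over this section, the grouping would show the two catalog pods confined to a single pair (c7,c13) while the kube-scheduler static pod's container dirs carry several pairs (c378,c723; c133,c223; c247,c522), presumably left over from successive container instances.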
Jan 21 10:56:52 crc kubenswrapper[4881]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 21 10:56:52 crc kubenswrapper[4881]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.900886 4881 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904179 4881 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904203 4881 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904209 4881 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904214 4881 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904219 4881 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904225 4881 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904232 4881 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904238 4881 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904243 4881 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904247 4881 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904252 4881 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904260 4881 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904267 4881 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904273 4881 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904278 4881 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904282 4881 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904286 4881 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904291 4881 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904295 4881 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904299 4881 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 
10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904305 4881 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904309 4881 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904312 4881 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904317 4881 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904320 4881 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904324 4881 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904330 4881 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904335 4881 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904341 4881 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904345 4881 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904350 4881 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904355 4881 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904360 4881 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904365 4881 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904370 4881 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904375 4881 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904379 4881 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904384 4881 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904389 4881 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904393 4881 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904399 4881 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904403 4881 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904408 4881 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904412 4881 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904433 4881 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 10:56:52 crc kubenswrapper[4881]: 
W0121 10:56:52.904438 4881 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904442 4881 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904446 4881 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904450 4881 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904455 4881 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904458 4881 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904463 4881 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904467 4881 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904471 4881 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904481 4881 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904486 4881 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904491 4881 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904495 4881 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904499 4881 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904504 4881 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904508 4881 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904512 4881 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904516 4881 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904520 4881 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904525 4881 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
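Each W-level feature_gate.go:330 entry names an OpenShift-side gate that this kubelet build does not recognize, and the same set recurs on every parse pass later in the capture. A minimal sketch, assuming shortened stand-in sample lines, that reduces such a capture to per-gate counts:

import re
from collections import Counter

# Minimal sketch: count each unrecognized feature gate named in a capture.
UNRECOGNIZED = re.compile(r"unrecognized feature gate: (\w+)")

def unrecognized_gates(journal_text):
    """Count occurrences of each unrecognized gate name."""
    return Counter(UNRECOGNIZED.findall(journal_text))

sample = ("feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall "
          "feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall")
print(unrecognized_gates(sample))  # Counter({'ClusterAPIInstall': 2})

Counts greater than one indicate the same gate set re-logged on successive parse passes rather than new warnings.
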
Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904529 4881 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904534 4881 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904538 4881 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904542 4881 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904545 4881 feature_gate.go:330] unrecognized feature gate: Example Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.904549 4881 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904662 4881 flags.go:64] FLAG: --address="0.0.0.0" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904675 4881 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904687 4881 flags.go:64] FLAG: --anonymous-auth="true" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904696 4881 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904703 4881 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904708 4881 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904715 4881 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904722 4881 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904727 4881 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904733 4881 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904737 4881 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904743 4881 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904748 4881 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904753 4881 flags.go:64] FLAG: --cgroup-root="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904758 4881 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904763 4881 flags.go:64] FLAG: --client-ca-file="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904768 4881 flags.go:64] FLAG: --cloud-config="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904773 4881 flags.go:64] FLAG: --cloud-provider="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904777 4881 flags.go:64] FLAG: --cluster-dns="[]" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904812 4881 flags.go:64] FLAG: --cluster-domain="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904818 4881 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904823 4881 flags.go:64] FLAG: --config-dir="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904828 4881 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 21 
10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904834 4881 flags.go:64] FLAG: --container-log-max-files="5" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904840 4881 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904845 4881 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904849 4881 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904853 4881 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904857 4881 flags.go:64] FLAG: --contention-profiling="false" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904861 4881 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904867 4881 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904871 4881 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904875 4881 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904880 4881 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904884 4881 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904888 4881 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904892 4881 flags.go:64] FLAG: --enable-load-reader="false" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904896 4881 flags.go:64] FLAG: --enable-server="true" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904901 4881 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904910 4881 flags.go:64] FLAG: --event-burst="100" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904914 4881 flags.go:64] FLAG: --event-qps="50" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904918 4881 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904922 4881 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904926 4881 flags.go:64] FLAG: --eviction-hard="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904931 4881 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904940 4881 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904944 4881 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904948 4881 flags.go:64] FLAG: --eviction-soft="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904952 4881 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904956 4881 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904960 4881 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904964 4881 flags.go:64] FLAG: --experimental-mounter-path="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904968 4881 flags.go:64] 
FLAG: --fail-cgroupv1="false" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904972 4881 flags.go:64] FLAG: --fail-swap-on="true" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904976 4881 flags.go:64] FLAG: --feature-gates="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904988 4881 flags.go:64] FLAG: --file-check-frequency="20s" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904993 4881 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.904997 4881 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905001 4881 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905005 4881 flags.go:64] FLAG: --healthz-port="10248" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905009 4881 flags.go:64] FLAG: --help="false" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905013 4881 flags.go:64] FLAG: --hostname-override="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905018 4881 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905022 4881 flags.go:64] FLAG: --http-check-frequency="20s" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905026 4881 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905031 4881 flags.go:64] FLAG: --image-credential-provider-config="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905035 4881 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905039 4881 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905043 4881 flags.go:64] FLAG: --image-service-endpoint="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905047 4881 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905051 4881 flags.go:64] FLAG: --kube-api-burst="100" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905055 4881 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905060 4881 flags.go:64] FLAG: --kube-api-qps="50" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905064 4881 flags.go:64] FLAG: --kube-reserved="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905068 4881 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905071 4881 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905076 4881 flags.go:64] FLAG: --kubelet-cgroups="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905080 4881 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905084 4881 flags.go:64] FLAG: --lock-file="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905087 4881 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905091 4881 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905096 4881 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905101 4881 flags.go:64] FLAG: --log-json-split-stream="false" Jan 21 10:56:52 crc 
kubenswrapper[4881]: I0121 10:56:52.905106 4881 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905110 4881 flags.go:64] FLAG: --log-text-split-stream="false" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905114 4881 flags.go:64] FLAG: --logging-format="text" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905117 4881 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905122 4881 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905126 4881 flags.go:64] FLAG: --manifest-url="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905130 4881 flags.go:64] FLAG: --manifest-url-header="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905135 4881 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905144 4881 flags.go:64] FLAG: --max-open-files="1000000" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905150 4881 flags.go:64] FLAG: --max-pods="110" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905154 4881 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905158 4881 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905162 4881 flags.go:64] FLAG: --memory-manager-policy="None" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905166 4881 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905170 4881 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905174 4881 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905178 4881 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905190 4881 flags.go:64] FLAG: --node-status-max-images="50" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905198 4881 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905208 4881 flags.go:64] FLAG: --oom-score-adj="-999" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905213 4881 flags.go:64] FLAG: --pod-cidr="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905219 4881 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905232 4881 flags.go:64] FLAG: --pod-manifest-path="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905236 4881 flags.go:64] FLAG: --pod-max-pids="-1" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905241 4881 flags.go:64] FLAG: --pods-per-core="0" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905245 4881 flags.go:64] FLAG: --port="10250" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905249 4881 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905253 4881 flags.go:64] FLAG: --provider-id="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905257 4881 flags.go:64] FLAG: --qos-reserved="" Jan 21 10:56:52 crc 
kubenswrapper[4881]: I0121 10:56:52.905261 4881 flags.go:64] FLAG: --read-only-port="10255" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905265 4881 flags.go:64] FLAG: --register-node="true" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905269 4881 flags.go:64] FLAG: --register-schedulable="true" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905273 4881 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905283 4881 flags.go:64] FLAG: --registry-burst="10" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905287 4881 flags.go:64] FLAG: --registry-qps="5" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905291 4881 flags.go:64] FLAG: --reserved-cpus="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905296 4881 flags.go:64] FLAG: --reserved-memory="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905301 4881 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905305 4881 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905309 4881 flags.go:64] FLAG: --rotate-certificates="false" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905314 4881 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905318 4881 flags.go:64] FLAG: --runonce="false" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905322 4881 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905326 4881 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905335 4881 flags.go:64] FLAG: --seccomp-default="false" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905339 4881 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905343 4881 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905348 4881 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905352 4881 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905356 4881 flags.go:64] FLAG: --storage-driver-password="root" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905360 4881 flags.go:64] FLAG: --storage-driver-secure="false" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905365 4881 flags.go:64] FLAG: --storage-driver-table="stats" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905370 4881 flags.go:64] FLAG: --storage-driver-user="root" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905375 4881 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905379 4881 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905384 4881 flags.go:64] FLAG: --system-cgroups="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905388 4881 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905395 4881 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905399 4881 flags.go:64] FLAG: --tls-cert-file="" Jan 21 10:56:52 
crc kubenswrapper[4881]: I0121 10:56:52.905403 4881 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905410 4881 flags.go:64] FLAG: --tls-min-version="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905414 4881 flags.go:64] FLAG: --tls-private-key-file="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905419 4881 flags.go:64] FLAG: --topology-manager-policy="none" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905423 4881 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905427 4881 flags.go:64] FLAG: --topology-manager-scope="container" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905431 4881 flags.go:64] FLAG: --v="2" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905436 4881 flags.go:64] FLAG: --version="false" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905442 4881 flags.go:64] FLAG: --vmodule="" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905447 4881 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905452 4881 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905595 4881 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905604 4881 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905610 4881 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905613 4881 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905617 4881 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905621 4881 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905625 4881 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905630 4881 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905635 4881 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905640 4881 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905653 4881 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905658 4881 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905662 4881 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905666 4881 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905671 4881 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905677 4881 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
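The I-level flags.go:64 entries above dump every kubelet flag with its effective value (e.g. --read-only-port="10255", --rotate-certificates="false"). A minimal sketch that parses the dump into a name-to-value mapping; the FLAG pattern and the sample line are assumptions fitted to the format shown:

import re

# Minimal sketch: turn the flags.go:64 dump into a name -> value mapping.
FLAG = re.compile(r'FLAG: (--[\w-]+)="([^"]*)"')

def flag_map(journal_text):
    """Map each dumped kubelet flag to its quoted value (last one wins)."""
    return dict(FLAG.findall(journal_text))

sample = 'I0121 10:56:52.905261 4881 flags.go:64] FLAG: --read-only-port="10255"'
print(flag_map(sample))  # {'--read-only-port': '10255'}

Comparing two nodes' effective flags then reduces to a diff of the resulting dicts.
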
Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905682 4881 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905687 4881 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905692 4881 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905697 4881 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905702 4881 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905706 4881 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905709 4881 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905713 4881 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905717 4881 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905720 4881 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905724 4881 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905727 4881 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905731 4881 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905734 4881 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905738 4881 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905741 4881 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905745 4881 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905748 4881 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905752 4881 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905757 4881 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905762 4881 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905766 4881 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905770 4881 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905775 4881 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905779 4881 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905799 4881 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 10:56:52 crc 
kubenswrapper[4881]: W0121 10:56:52.905803 4881 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905807 4881 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905811 4881 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905814 4881 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905830 4881 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905834 4881 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905837 4881 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905841 4881 feature_gate.go:330] unrecognized feature gate: Example Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905846 4881 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905850 4881 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905854 4881 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905859 4881 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905863 4881 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905867 4881 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905873 4881 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905879 4881 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905884 4881 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905888 4881 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905893 4881 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905897 4881 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905901 4881 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905906 4881 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905912 4881 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905917 4881 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905922 4881 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905926 4881 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905930 4881 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905935 4881 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.905940 4881 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.905954 4881 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.922137 4881 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.922203 4881 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922351 4881 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922367 4881 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922376 4881 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922387 4881 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922397 4881 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922405 4881 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922415 4881 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922423 4881 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922434 4881 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922444 4881 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922453 4881 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922461 4881 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922470 4881 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922478 4881 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922486 4881 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922494 4881 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922503 4881 feature_gate.go:330] unrecognized feature gate: Example Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922511 4881 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922519 4881 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922526 4881 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922534 4881 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922542 4881 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922550 4881 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922558 4881 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922565 4881 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922573 4881 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922581 4881 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922591 4881 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922600 4881 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922607 4881 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922615 4881 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922623 4881 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922631 4881 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922639 4881 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922646 4881 feature_gate.go:330] unrecognized feature 
gate: AWSEFSDriverVolumeMetrics Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922654 4881 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922662 4881 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922670 4881 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922678 4881 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922686 4881 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922693 4881 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922701 4881 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922710 4881 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922717 4881 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922725 4881 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922732 4881 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922741 4881 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922753 4881 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922767 4881 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922775 4881 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922813 4881 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922822 4881 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922831 4881 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922839 4881 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922847 4881 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922855 4881 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922864 4881 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922871 4881 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922882 4881 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922893 4881 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922902 4881 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922913 4881 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922922 4881 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922933 4881 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922941 4881 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922950 4881 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922959 4881 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922971 4881 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.922981 4881 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923001 4881 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923016 4881 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.923034 4881 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923328 4881 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923347 4881 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923357 4881 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923366 4881 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923375 4881 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923383 4881 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923392 4881 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923399 4881 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923409 4881 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 
10:56:52.923417 4881 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923426 4881 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923434 4881 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923442 4881 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923450 4881 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923457 4881 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923465 4881 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923473 4881 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923481 4881 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923488 4881 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923499 4881 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923508 4881 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923516 4881 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923525 4881 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923533 4881 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923540 4881 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923548 4881 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923559 4881 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923568 4881 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923576 4881 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923584 4881 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923592 4881 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923600 4881 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923608 4881 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923618 4881 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923628 4881 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923636 4881 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923644 4881 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923656 4881 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923664 4881 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923674 4881 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923683 4881 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923691 4881 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923700 4881 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923709 4881 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923717 4881 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923725 4881 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923734 4881 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923742 4881 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923750 4881 feature_gate.go:330] unrecognized feature gate: Example Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923758 4881 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923766 4881 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923773 4881 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923812 4881 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923824 4881 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923834 4881 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923842 4881 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923851 4881 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923859 4881 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923867 4881 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923875 4881 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923882 4881 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923890 4881 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923897 4881 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923905 4881 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923913 4881 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923920 4881 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923928 4881 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923935 4881 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923943 4881 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923951 4881 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 10:56:52 crc kubenswrapper[4881]: W0121 10:56:52.923960 4881 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.923973 4881 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.924263 4881 server.go:940] "Client rotation is on, will bootstrap in background" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.929356 4881 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.929485 4881 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
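Each parse pass closes with an I-level feature_gate.go:386 summary of the effective gate map, rendered in Go's map[...] notation. A minimal sketch that converts one summary line into a Python dict; the parsing assumes the simple Name:bool pairs seen in this capture:

import re

# Minimal sketch: parse a feature_gate.go:386 summary line into a dict.
# The map[...] text is Go's fmt rendering; this parse is an assumption
# that holds for the plain Name:true/false pairs shown above.
PAIR = re.compile(r"(\w+):(true|false)")

def parse_gate_summary(line):
    """Return {gate_name: bool} for one 'feature gates: {map[...]}' line."""
    body = re.search(r"feature gates: \{map\[(.*?)\]\}", line)
    if body is None:
        return {}
    return {name: val == "true" for name, val in PAIR.findall(body.group(1))}

sample = ("feature_gate.go:386] feature gates: "
          "{map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}")
print(parse_gate_summary(sample))
# {'CloudDualStackNodeIPs': True, 'KMSv1': True, 'NodeSwap': False}
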
Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.930305 4881 server.go:997] "Starting client certificate rotation" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.930340 4881 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.930946 4881 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-19 04:48:10.680370853 +0000 UTC Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.931145 4881 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.990962 4881 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 10:56:52 crc kubenswrapper[4881]: E0121 10:56:52.992926 4881 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.4:6443: connect: connection refused" logger="UnhandledError" Jan 21 10:56:52 crc kubenswrapper[4881]: I0121 10:56:52.996634 4881 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.009424 4881 log.go:25] "Validated CRI v1 runtime API" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.034067 4881 log.go:25] "Validated CRI v1 image API" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.036373 4881 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.040272 4881 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-21-10-51-13-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.040325 4881 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.066649 4881 manager.go:217] Machine: {Timestamp:2026-01-21 10:56:53.064927604 +0000 UTC m=+0.324884093 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:5fb73d3d-5879-4958-af84-1cb776cbe5bd BootID:26a8a75a-20da-43b0-891d-353287c7b817 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 
Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:b9:a7:78 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:b9:a7:78 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:69:a7:79 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:a2:23:7a Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:ab:d3:12 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:3c:35:72 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:22:ad:c0:b3:94:eb Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:c2:54:34:7f:a1:87 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] 
Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.067124 4881 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.067366 4881 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.067890 4881 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.068118 4881 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.068200 4881 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.068454 4881 topology_manager.go:138] "Creating topology manager with none policy" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.068508 4881 
container_manager_linux.go:303] "Creating device plugin manager" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.068828 4881 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.068916 4881 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.069264 4881 state_mem.go:36] "Initialized new in-memory state store" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.069413 4881 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.070119 4881 kubelet.go:418] "Attempting to sync node with API server" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.070191 4881 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.070252 4881 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.070307 4881 kubelet.go:324] "Adding apiserver pod source" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.070391 4881 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.072407 4881 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.073172 4881 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 21 10:56:53 crc kubenswrapper[4881]: W0121 10:56:53.073494 4881 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.4:6443: connect: connection refused Jan 21 10:56:53 crc kubenswrapper[4881]: W0121 10:56:53.073503 4881 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.4:6443: connect: connection refused Jan 21 10:56:53 crc kubenswrapper[4881]: E0121 10:56:53.073626 4881 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.4:6443: connect: connection refused" logger="UnhandledError" Jan 21 10:56:53 crc kubenswrapper[4881]: E0121 10:56:53.073666 4881 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.4:6443: connect: connection refused" logger="UnhandledError" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.074322 4881 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.075066 4881 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 21 10:56:53 crc kubenswrapper[4881]: 
I0121 10:56:53.075110 4881 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.075126 4881 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.075182 4881 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.075209 4881 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.075221 4881 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.075235 4881 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.075257 4881 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.075273 4881 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.075287 4881 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.075305 4881 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.075317 4881 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.075514 4881 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.076171 4881 server.go:1280] "Started kubelet" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.076331 4881 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.077707 4881 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.078637 4881 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.4:6443: connect: connection refused Jan 21 10:56:53 crc systemd[1]: Started Kubernetes Kubelet. 
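
At this point systemd considers the unit started and the kubelet begins serving its ancillary APIs, including the podresources gRPC endpoint whose rate limit was configured above (qps=100, burst=10) and which is served, per the entries that follow, on unix:/var/lib/kubelet/pod-resources/kubelet.sock. A minimal client sketch against that socket, assuming the published k8s.io/kubelet podresources v1 API and local read access to the socket:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        podresourcesapi "k8s.io/kubelet/pkg/apis/podresources/v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        // Endpoint from the log: unix:/var/lib/kubelet/pod-resources/kubelet.sock
        conn, err := grpc.DialContext(ctx, "unix:///var/lib/kubelet/pod-resources/kubelet.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := podresourcesapi.NewPodResourcesListerClient(conn)
        resp, err := client.List(ctx, &podresourcesapi.ListPodResourcesRequest{})
        if err != nil {
            panic(err)
        }
        for _, pod := range resp.GetPodResources() {
            fmt.Printf("%s/%s: %d containers\n",
                pod.GetNamespace(), pod.GetName(), len(pod.GetContainers()))
        }
    }
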
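
Further down, once the volume manager and desired-state populator start, the long run of reconstruct.go:130 "Volume is marked as uncertain" entries is the kubelet rebuilding its actual state of the world from disk: with the API server still unreachable, it walks /var/lib/kubelet/pods/<uid>/volumes/<plugin>/<name> and records every volume directory it finds as an uncertain mount to be reconciled later. A rough sketch of that directory walk, under the layout assumption only (the real reconstruction also inspects mount points and SELinux mount contexts):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        root := "/var/lib/kubelet/pods"
        pods, err := os.ReadDir(root)
        if err != nil {
            panic(err)
        }
        for _, pod := range pods { // one directory per pod UID
            volRoot := filepath.Join(root, pod.Name(), "volumes")
            plugins, err := os.ReadDir(volRoot)
            if err != nil {
                continue // pod has no reconstructable volumes on disk
            }
            // On disk the plugin name is escaped with '~', e.g.
            // kubernetes.io~secret for the kubernetes.io/secret seen in the log.
            for _, plugin := range plugins {
                vols, err := os.ReadDir(filepath.Join(volRoot, plugin.Name()))
                if err != nil {
                    continue
                }
                for _, vol := range vols {
                    fmt.Printf("uncertain volume: podUID=%s plugin=%s name=%s\n",
                        pod.Name(), plugin.Name(), vol.Name())
                }
            }
        }
    }
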
Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.079843 4881 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.082579 4881 server.go:460] "Adding debug handlers to kubelet server" Jan 21 10:56:53 crc kubenswrapper[4881]: E0121 10:56:53.082087 4881 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.4:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188cb9c5da8cefc9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 10:56:53.076127689 +0000 UTC m=+0.336084198,LastTimestamp:2026-01-21 10:56:53.076127689 +0000 UTC m=+0.336084198,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.083027 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.083165 4881 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.083257 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 21:45:06.279592641 +0000 UTC Jan 21 10:56:53 crc kubenswrapper[4881]: E0121 10:56:53.083703 4881 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.083524 4881 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.088547 4881 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.083543 4881 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 21 10:56:53 crc kubenswrapper[4881]: E0121 10:56:53.090189 4881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" interval="200ms" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.091196 4881 factory.go:55] Registering systemd factory Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.091275 4881 factory.go:221] Registration of the systemd container factory successfully Jan 21 10:56:53 crc kubenswrapper[4881]: W0121 10:56:53.091465 4881 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.4:6443: connect: connection refused Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.091897 4881 factory.go:153] Registering CRI-O factory Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.091951 4881 factory.go:221] Registration of the crio container factory successfully Jan 21 10:56:53 crc kubenswrapper[4881]: E0121 10:56:53.091829 4881 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.4:6443: connect: connection refused" logger="UnhandledError" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.092175 4881 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.092342 4881 factory.go:103] Registering Raw factory Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.092408 4881 manager.go:1196] Started watching for new ooms in manager Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.100608 4881 manager.go:319] Starting recovery of all containers Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106011 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106085 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106101 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106115 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106127 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106140 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106175 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106190 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 
10:56:53.106205 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106219 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106232 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106246 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106260 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106277 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106293 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106308 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106320 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106332 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106346 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106362 4881 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106401 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106416 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106430 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106443 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106478 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106491 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106539 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106555 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106567 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106580 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106594 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106606 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106619 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106631 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106643 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106655 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106664 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106674 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106686 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106696 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106706 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106717 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106727 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106740 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106752 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106767 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.106810 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107011 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107033 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107048 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107064 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107078 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107096 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107110 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107150 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107165 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107178 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107192 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107205 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107219 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107234 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107246 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107261 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107276 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107288 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107299 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107310 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107323 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107333 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107344 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107356 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107367 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107376 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107392 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107404 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107415 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107427 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107438 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107452 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107462 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107472 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107484 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107495 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107506 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107517 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107529 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107539 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107553 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107563 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107573 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107586 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107595 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107611 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107624 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107637 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107648 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107657 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107669 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107680 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107690 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107702 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107712 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107724 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107735 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107752 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107769 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107780 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107811 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107824 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107836 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107849 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107859 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107869 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107921 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107940 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107950 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107960 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107970 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107979 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107989 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.107998 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108007 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108018 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108027 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108038 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108047 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108058 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108067 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108077 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108087 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108097 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108107 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108118 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108135 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108163 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108180 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108193 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108211 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108226 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108239 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108254 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108265 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108279 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108292 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108306 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108319 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108329 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108339 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108349 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108359 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108369 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108379 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108388 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108399 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108408 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108417 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108427 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108436 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108446 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108456 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108465 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108476 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108485 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108495 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108505 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108514 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108524 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108533 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108549 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108562 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108572 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.108583 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111106 4881 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111174 4881 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111195 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111222 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111233 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111251 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111262 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111314 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111325 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111335 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111346 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111360 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111370 4881 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111381 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111393 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111420 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111446 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111460 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111478 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111491 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111505 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111520 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111535 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111556 4881 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111571 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111585 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111599 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111613 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111627 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111642 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111657 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111684 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111703 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111719 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111735 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111749 4881 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111769 4881 reconstruct.go:97] "Volume reconstruction finished" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.111827 4881 reconciler.go:26] "Reconciler: start to sync state" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.131781 4881 manager.go:324] Recovery completed Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.145733 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:53 crc kubenswrapper[4881]: E0121 10:56:53.189392 4881 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.208021 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.210617 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.210743 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:53 crc kubenswrapper[4881]: E0121 10:56:53.289916 4881 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 10:56:53 crc kubenswrapper[4881]: E0121 10:56:53.291534 4881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" interval="400ms" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.307603 4881 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.309332 4881 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.309378 4881 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.309407 4881 kubelet.go:2335] "Starting kubelet main sync loop" Jan 21 10:56:53 crc kubenswrapper[4881]: E0121 10:56:53.309498 4881 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 10:56:53 crc kubenswrapper[4881]: W0121 10:56:53.310504 4881 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.4:6443: connect: connection refused Jan 21 10:56:53 crc kubenswrapper[4881]: E0121 10:56:53.310579 4881 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.4:6443: connect: connection refused" logger="UnhandledError" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.320037 4881 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.320348 4881 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.320456 4881 state_mem.go:36] "Initialized new in-memory state store" Jan 21 10:56:53 crc kubenswrapper[4881]: E0121 10:56:53.391128 4881 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 10:56:53 crc kubenswrapper[4881]: E0121 10:56:53.410430 4881 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 21 10:56:53 crc kubenswrapper[4881]: E0121 10:56:53.491480 4881 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.572090 4881 policy_none.go:49] "None policy: Start" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.574136 4881 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.574175 4881 state_mem.go:35] "Initializing new in-memory state store" Jan 21 10:56:53 crc kubenswrapper[4881]: E0121 10:56:53.591805 4881 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 10:56:53 crc kubenswrapper[4881]: E0121 10:56:53.610931 4881 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.649921 4881 manager.go:334] "Starting Device Plugin manager" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.650007 4881 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.650023 4881 server.go:79] "Starting device plugin registration server" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.650433 4881 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.650454 4881 container_log_manager.go:189] "Initializing container log rotate workers" 
workers=1 monitorPeriod="10s" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.650634 4881 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.650720 4881 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.650730 4881 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 21 10:56:53 crc kubenswrapper[4881]: E0121 10:56:53.663243 4881 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 10:56:53 crc kubenswrapper[4881]: E0121 10:56:53.692876 4881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" interval="800ms" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.751052 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.753477 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.753546 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.753569 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.753616 4881 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 10:56:53 crc kubenswrapper[4881]: E0121 10:56:53.754420 4881 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.4:6443: connect: connection refused" node="crc" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.954778 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.957568 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.957647 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.957672 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:53 crc kubenswrapper[4881]: I0121 10:56:53.957724 4881 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 10:56:53 crc kubenswrapper[4881]: E0121 10:56:53.958247 4881 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.4:6443: connect: connection refused" node="crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.011351 4881 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 21 10:56:54 crc 
kubenswrapper[4881]: I0121 10:56:54.011475 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.013819 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.013873 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.013896 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.014086 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.014739 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.014851 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.015891 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.015950 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.015976 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.016168 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.016256 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.016292 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.016395 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.016737 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.016845 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.018309 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.018348 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.018367 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.018349 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.018412 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.018454 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.018715 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.019004 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.019095 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.020305 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.020352 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.020370 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.020402 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.020448 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.020468 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.020576 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.020665 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.020712 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.021981 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.021997 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.022027 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.022059 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.022112 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.022065 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.022383 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.022437 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.023674 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.023735 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.023763 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.079733 4881 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.4:6443: connect: connection refused Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.089118 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 22:11:34.943616608 +0000 UTC Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.122149 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.122200 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.122222 4881 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.122243 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.122263 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.122294 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.122325 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.122344 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.122358 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.122452 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.122497 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.122519 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.122535 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.122553 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.122600 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.224389 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.224458 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.224480 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.224500 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.224515 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.224529 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.224543 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.224559 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.224576 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.224593 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.224608 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.224625 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.224639 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.224642 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.224655 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.224668 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.224708 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.224733 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.224602 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.224762 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.224766 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.224892 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.224873 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.224927 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.224951 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.224972 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.224982 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.225009 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.224993 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.224854 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: W0121 10:56:54.300519 4881 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.4:6443: connect: connection refused Jan 21 10:56:54 crc kubenswrapper[4881]: E0121 10:56:54.300629 4881 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.4:6443: connect: connection refused" logger="UnhandledError" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.358472 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.360236 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.360296 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.360313 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.360343 4881 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 10:56:54 crc kubenswrapper[4881]: E0121 10:56:54.361004 4881 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.4:6443: connect: connection refused" node="crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 
10:56:54.368276 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: W0121 10:56:54.375968 4881 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.4:6443: connect: connection refused Jan 21 10:56:54 crc kubenswrapper[4881]: E0121 10:56:54.376070 4881 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.4:6443: connect: connection refused" logger="UnhandledError" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.402321 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: W0121 10:56:54.408064 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-d1f64a3aa6d3c1b3c3b96578c4d5a1877bc1ecf13184236aa6dee46ee16f6183 WatchSource:0}: Error finding container d1f64a3aa6d3c1b3c3b96578c4d5a1877bc1ecf13184236aa6dee46ee16f6183: Status 404 returned error can't find the container with id d1f64a3aa6d3c1b3c3b96578c4d5a1877bc1ecf13184236aa6dee46ee16f6183 Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.434818 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.448030 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: I0121 10:56:54.470924 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 10:56:54 crc kubenswrapper[4881]: W0121 10:56:54.482277 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-b3a41306cd1d58e4c9b0156c8faf5da779718dade8c4fe7e6d4fd24592ab423d WatchSource:0}: Error finding container b3a41306cd1d58e4c9b0156c8faf5da779718dade8c4fe7e6d4fd24592ab423d: Status 404 returned error can't find the container with id b3a41306cd1d58e4c9b0156c8faf5da779718dade8c4fe7e6d4fd24592ab423d Jan 21 10:56:54 crc kubenswrapper[4881]: E0121 10:56:54.494493 4881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" interval="1.6s" Jan 21 10:56:54 crc kubenswrapper[4881]: W0121 10:56:54.500225 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-cd9ddb3fb2ebaa63e62a778a3ed338a647b9936567657852e6e8760686db1d84 WatchSource:0}: Error finding container cd9ddb3fb2ebaa63e62a778a3ed338a647b9936567657852e6e8760686db1d84: Status 404 returned error can't find the container with id cd9ddb3fb2ebaa63e62a778a3ed338a647b9936567657852e6e8760686db1d84 Jan 21 10:56:54 crc kubenswrapper[4881]: W0121 10:56:54.501665 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-31b08728f04ef588c590ff4eeddb6f16c97c8fe8747a36055530b784a88f8bec WatchSource:0}: Error finding container 31b08728f04ef588c590ff4eeddb6f16c97c8fe8747a36055530b784a88f8bec: Status 404 returned error can't find the container with id 31b08728f04ef588c590ff4eeddb6f16c97c8fe8747a36055530b784a88f8bec Jan 21 10:56:54 crc kubenswrapper[4881]: W0121 10:56:54.512146 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-2b6a85bd86b6b6366d81cef6d2b74870e41e24c1f7757a53c0484c424ce6ff07 WatchSource:0}: Error finding container 2b6a85bd86b6b6366d81cef6d2b74870e41e24c1f7757a53c0484c424ce6ff07: Status 404 returned error can't find the container with id 2b6a85bd86b6b6366d81cef6d2b74870e41e24c1f7757a53c0484c424ce6ff07 Jan 21 10:56:54 crc kubenswrapper[4881]: W0121 10:56:54.587103 4881 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.4:6443: connect: connection refused Jan 21 10:56:54 crc kubenswrapper[4881]: E0121 10:56:54.587184 4881 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.4:6443: connect: connection refused" logger="UnhandledError" Jan 21 10:56:54 crc kubenswrapper[4881]: W0121 10:56:54.740834 4881 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 
38.129.56.4:6443: connect: connection refused Jan 21 10:56:54 crc kubenswrapper[4881]: E0121 10:56:54.740941 4881 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.4:6443: connect: connection refused" logger="UnhandledError" Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.032528 4881 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 10:56:55 crc kubenswrapper[4881]: E0121 10:56:55.033663 4881 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.4:6443: connect: connection refused" logger="UnhandledError" Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.080252 4881 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.4:6443: connect: connection refused Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.089609 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 12:20:36.900195685 +0000 UTC Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.161509 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.164068 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.164121 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.164135 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.164170 4881 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 10:56:55 crc kubenswrapper[4881]: E0121 10:56:55.164776 4881 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.4:6443: connect: connection refused" node="crc" Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.372582 4881 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f" exitCode=0 Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.372734 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f"} Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.373103 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2b6a85bd86b6b6366d81cef6d2b74870e41e24c1f7757a53c0484c424ce6ff07"} Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.373289 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.374511 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.374561 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.374586 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.375094 4881 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35" exitCode=0 Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.375153 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35"} Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.375222 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"31b08728f04ef588c590ff4eeddb6f16c97c8fe8747a36055530b784a88f8bec"} Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.375349 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.377446 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.377482 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.377499 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.378868 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.380903 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.380939 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.380949 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.385152 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e"} Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.385231 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cd9ddb3fb2ebaa63e62a778a3ed338a647b9936567657852e6e8760686db1d84"} Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.387198 4881 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="10a0569ab7ed4586aadd7deab6398db98bfc9a6afd3903d5466c05021a41632a" exitCode=0 Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.387332 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"10a0569ab7ed4586aadd7deab6398db98bfc9a6afd3903d5466c05021a41632a"} Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.387449 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b3a41306cd1d58e4c9b0156c8faf5da779718dade8c4fe7e6d4fd24592ab423d"} Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.387713 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.390205 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.390247 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.390268 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.390675 4881 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="988de7ed33eebe3cf67b8c6362d70c761e509feb2c3b72e6f6a4ffb9cddbf421" exitCode=0 Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.390755 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"988de7ed33eebe3cf67b8c6362d70c761e509feb2c3b72e6f6a4ffb9cddbf421"} Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.390835 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d1f64a3aa6d3c1b3c3b96578c4d5a1877bc1ecf13184236aa6dee46ee16f6183"} Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.390971 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.392164 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.392221 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:55 crc kubenswrapper[4881]: I0121 10:56:55.392235 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.079370 4881 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.4:6443: connect: connection refused Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.090411 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 17:48:01.254968776 +0000 UTC Jan 21 10:56:56 crc kubenswrapper[4881]: E0121 10:56:56.095970 4881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" interval="3.2s" Jan 21 10:56:56 crc kubenswrapper[4881]: W0121 10:56:56.430285 4881 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.4:6443: connect: connection refused Jan 21 10:56:56 crc kubenswrapper[4881]: E0121 10:56:56.430391 4881 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.4:6443: connect: connection refused" logger="UnhandledError" Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.473231 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6cf7bf06a11465e04a80fe7ae667f9c15741137062514a621955622d2b339dce"} Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.473455 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.474938 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.474965 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.474980 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.482426 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2"} Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.482474 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d"} Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.482484 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534"} Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.486114 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3eba9cbb70fbd88687c81b18ad50f8386f836bf2fa2c8f9e1c503a20af985416"} Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.486156 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"36430b9d5b01b4a6f3b9e7b58bfbec0c258f34847b321cb45bc3b23f84cf09fa"} Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.486278 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.486173 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"17ef83fedf9cc77cf73fdd00486ec9b0b04712a60a5448402754a44ad46da439"} Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.488166 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.488199 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.488217 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.492687 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754"} Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.492718 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29"} Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.492731 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1"} Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.492985 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.501720 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.506536 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.506588 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.509182 4881 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="dae46ac7909a717555defd27b6fa785f9c7f927fd7806c7941529c2e64ee3700" exitCode=0 Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.509251 4881 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"dae46ac7909a717555defd27b6fa785f9c7f927fd7806c7941529c2e64ee3700"} Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.509470 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.510613 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.510632 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.510641 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.765138 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.766641 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.766704 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.766718 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:56 crc kubenswrapper[4881]: I0121 10:56:56.766753 4881 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 10:56:56 crc kubenswrapper[4881]: E0121 10:56:56.767623 4881 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.4:6443: connect: connection refused" node="crc" Jan 21 10:56:56 crc kubenswrapper[4881]: W0121 10:56:56.790054 4881 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.4:6443: connect: connection refused Jan 21 10:56:56 crc kubenswrapper[4881]: E0121 10:56:56.790154 4881 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.4:6443: connect: connection refused" logger="UnhandledError" Jan 21 10:56:56 crc kubenswrapper[4881]: W0121 10:56:56.928248 4881 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.4:6443: connect: connection refused Jan 21 10:56:56 crc kubenswrapper[4881]: E0121 10:56:56.928408 4881 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.4:6443: connect: connection refused" logger="UnhandledError" Jan 21 10:56:57 crc kubenswrapper[4881]: I0121 10:56:57.090550 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: 
Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 12:19:06.405077135 +0000 UTC Jan 21 10:56:57 crc kubenswrapper[4881]: I0121 10:56:57.132812 4881 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.4:6443: connect: connection refused Jan 21 10:56:57 crc kubenswrapper[4881]: I0121 10:56:57.515517 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d84c900436f03473de2cb7e61d5cacb76cae260a4b22be5debafff2a5cb4d98f"} Jan 21 10:56:57 crc kubenswrapper[4881]: I0121 10:56:57.515595 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f"} Jan 21 10:56:57 crc kubenswrapper[4881]: I0121 10:56:57.515629 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:57 crc kubenswrapper[4881]: I0121 10:56:57.517140 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:57 crc kubenswrapper[4881]: I0121 10:56:57.517178 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:57 crc kubenswrapper[4881]: I0121 10:56:57.517187 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:57 crc kubenswrapper[4881]: I0121 10:56:57.518925 4881 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6b3e4e88955652dacaa965ab4ff099595a6bb920836bfd4ad703984e00029b98" exitCode=0 Jan 21 10:56:57 crc kubenswrapper[4881]: I0121 10:56:57.519033 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6b3e4e88955652dacaa965ab4ff099595a6bb920836bfd4ad703984e00029b98"} Jan 21 10:56:57 crc kubenswrapper[4881]: I0121 10:56:57.519084 4881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 10:56:57 crc kubenswrapper[4881]: I0121 10:56:57.519116 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:57 crc kubenswrapper[4881]: I0121 10:56:57.519118 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:57 crc kubenswrapper[4881]: I0121 10:56:57.519222 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:57 crc kubenswrapper[4881]: I0121 10:56:57.520206 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:57 crc kubenswrapper[4881]: I0121 10:56:57.520262 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:57 crc kubenswrapper[4881]: I0121 10:56:57.520286 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:57 crc kubenswrapper[4881]: I0121 10:56:57.520293 4881 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:57 crc kubenswrapper[4881]: I0121 10:56:57.520395 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:57 crc kubenswrapper[4881]: I0121 10:56:57.520413 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:57 crc kubenswrapper[4881]: I0121 10:56:57.520577 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:57 crc kubenswrapper[4881]: I0121 10:56:57.520594 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:57 crc kubenswrapper[4881]: I0121 10:56:57.520602 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:57 crc kubenswrapper[4881]: W0121 10:56:57.879302 4881 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.4:6443: connect: connection refused Jan 21 10:56:57 crc kubenswrapper[4881]: E0121 10:56:57.879394 4881 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.4:6443: connect: connection refused" logger="UnhandledError" Jan 21 10:56:58 crc kubenswrapper[4881]: I0121 10:56:58.080639 4881 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.4:6443: connect: connection refused Jan 21 10:56:58 crc kubenswrapper[4881]: I0121 10:56:58.091581 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 12:24:12.965133242 +0000 UTC Jan 21 10:56:58 crc kubenswrapper[4881]: I0121 10:56:58.402928 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:56:58 crc kubenswrapper[4881]: I0121 10:56:58.524116 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 10:56:58 crc kubenswrapper[4881]: I0121 10:56:58.525771 4881 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d84c900436f03473de2cb7e61d5cacb76cae260a4b22be5debafff2a5cb4d98f" exitCode=255 Jan 21 10:56:58 crc kubenswrapper[4881]: I0121 10:56:58.525860 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"d84c900436f03473de2cb7e61d5cacb76cae260a4b22be5debafff2a5cb4d98f"} Jan 21 10:56:58 crc kubenswrapper[4881]: I0121 10:56:58.525942 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:58 crc kubenswrapper[4881]: I0121 10:56:58.526977 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:58 crc kubenswrapper[4881]: 
I0121 10:56:58.527027 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:58 crc kubenswrapper[4881]: I0121 10:56:58.527044 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:58 crc kubenswrapper[4881]: I0121 10:56:58.528659 4881 scope.go:117] "RemoveContainer" containerID="d84c900436f03473de2cb7e61d5cacb76cae260a4b22be5debafff2a5cb4d98f" Jan 21 10:56:58 crc kubenswrapper[4881]: I0121 10:56:58.535471 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"06c96476b642e401c90a3f6810ea1624e2914188ba139b9303b963f1d5bc1f30"} Jan 21 10:56:58 crc kubenswrapper[4881]: I0121 10:56:58.535741 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3c05c062aefb9117f9f961f35221b8fa36b3374a184edcedea404d33539be0b6"} Jan 21 10:56:58 crc kubenswrapper[4881]: I0121 10:56:58.535890 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dfeb13ada78bc1504e657a94ab793ae27d4dbd9f333df47b951323f4e642e869"} Jan 21 10:56:58 crc kubenswrapper[4881]: I0121 10:56:58.536235 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c781ff2e87fbae055bac0e3f8f77e2eeee8aa4e38c83ff4b49645798949c550c"} Jan 21 10:56:59 crc kubenswrapper[4881]: I0121 10:56:59.092124 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 02:21:39.913174089 +0000 UTC Jan 21 10:56:59 crc kubenswrapper[4881]: I0121 10:56:59.136004 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 10:56:59 crc kubenswrapper[4881]: I0121 10:56:59.136345 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:59 crc kubenswrapper[4881]: I0121 10:56:59.137963 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:59 crc kubenswrapper[4881]: I0121 10:56:59.137991 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:59 crc kubenswrapper[4881]: I0121 10:56:59.138004 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:59 crc kubenswrapper[4881]: I0121 10:56:59.318168 4881 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 10:56:59 crc kubenswrapper[4881]: I0121 10:56:59.542609 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 10:56:59 crc kubenswrapper[4881]: I0121 10:56:59.544851 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570"} Jan 21 10:56:59 crc kubenswrapper[4881]: I0121 
10:56:59.544909 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:59 crc kubenswrapper[4881]: I0121 10:56:59.545073 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:56:59 crc kubenswrapper[4881]: I0121 10:56:59.545735 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:59 crc kubenswrapper[4881]: I0121 10:56:59.545832 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:59 crc kubenswrapper[4881]: I0121 10:56:59.545856 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:59 crc kubenswrapper[4881]: I0121 10:56:59.549210 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5cc29934ce0927ee4fdd2c97ca3bbbcaaf6287060d05447572edeefa8a66af25"} Jan 21 10:56:59 crc kubenswrapper[4881]: I0121 10:56:59.549349 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:59 crc kubenswrapper[4881]: I0121 10:56:59.550250 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:59 crc kubenswrapper[4881]: I0121 10:56:59.550307 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:59 crc kubenswrapper[4881]: I0121 10:56:59.550330 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:59 crc kubenswrapper[4881]: I0121 10:56:59.659471 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:56:59 crc kubenswrapper[4881]: I0121 10:56:59.968194 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:56:59 crc kubenswrapper[4881]: I0121 10:56:59.969473 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:56:59 crc kubenswrapper[4881]: I0121 10:56:59.969684 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:56:59 crc kubenswrapper[4881]: I0121 10:56:59.969911 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:56:59 crc kubenswrapper[4881]: I0121 10:56:59.970076 4881 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 10:57:00 crc kubenswrapper[4881]: I0121 10:57:00.092595 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 04:56:20.963831143 +0000 UTC Jan 21 10:57:00 crc kubenswrapper[4881]: I0121 10:57:00.557066 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:57:00 crc kubenswrapper[4881]: I0121 10:57:00.557165 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:57:00 crc kubenswrapper[4881]: I0121 10:57:00.557068 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Jan 21 10:57:00 crc kubenswrapper[4881]: I0121 10:57:00.559332 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:00 crc kubenswrapper[4881]: I0121 10:57:00.559437 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:00 crc kubenswrapper[4881]: I0121 10:57:00.559546 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:00 crc kubenswrapper[4881]: I0121 10:57:00.559549 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:00 crc kubenswrapper[4881]: I0121 10:57:00.559691 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:00 crc kubenswrapper[4881]: I0121 10:57:00.559715 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:01 crc kubenswrapper[4881]: I0121 10:57:01.093297 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 04:05:24.621131033 +0000 UTC Jan 21 10:57:01 crc kubenswrapper[4881]: I0121 10:57:01.234363 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:57:01 crc kubenswrapper[4881]: I0121 10:57:01.234733 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:57:01 crc kubenswrapper[4881]: I0121 10:57:01.236527 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:01 crc kubenswrapper[4881]: I0121 10:57:01.236556 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:01 crc kubenswrapper[4881]: I0121 10:57:01.236565 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:01 crc kubenswrapper[4881]: I0121 10:57:01.243743 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:57:01 crc kubenswrapper[4881]: I0121 10:57:01.559494 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:57:01 crc kubenswrapper[4881]: I0121 10:57:01.559741 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:57:01 crc kubenswrapper[4881]: I0121 10:57:01.560549 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:57:01 crc kubenswrapper[4881]: I0121 10:57:01.563228 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:01 crc kubenswrapper[4881]: I0121 10:57:01.563247 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:01 crc kubenswrapper[4881]: I0121 10:57:01.563265 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:01 crc kubenswrapper[4881]: I0121 10:57:01.563272 4881 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:01 crc kubenswrapper[4881]: I0121 10:57:01.563278 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:01 crc kubenswrapper[4881]: I0121 10:57:01.563286 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:02 crc kubenswrapper[4881]: I0121 10:57:02.094371 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 07:56:26.913863601 +0000 UTC Jan 21 10:57:02 crc kubenswrapper[4881]: I0121 10:57:02.395617 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 21 10:57:02 crc kubenswrapper[4881]: I0121 10:57:02.396219 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:57:02 crc kubenswrapper[4881]: I0121 10:57:02.401940 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:02 crc kubenswrapper[4881]: I0121 10:57:02.402007 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:02 crc kubenswrapper[4881]: I0121 10:57:02.402026 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:02 crc kubenswrapper[4881]: I0121 10:57:02.561304 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:57:02 crc kubenswrapper[4881]: I0121 10:57:02.562500 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:02 crc kubenswrapper[4881]: I0121 10:57:02.562561 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:02 crc kubenswrapper[4881]: I0121 10:57:02.562578 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:02 crc kubenswrapper[4881]: I0121 10:57:02.764119 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 21 10:57:02 crc kubenswrapper[4881]: I0121 10:57:02.764434 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:57:02 crc kubenswrapper[4881]: I0121 10:57:02.766149 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:02 crc kubenswrapper[4881]: I0121 10:57:02.766235 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:02 crc kubenswrapper[4881]: I0121 10:57:02.766270 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:03 crc kubenswrapper[4881]: I0121 10:57:03.095025 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 13:13:29.349718381 +0000 UTC Jan 21 10:57:03 crc kubenswrapper[4881]: E0121 10:57:03.663952 4881 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 10:57:03 crc kubenswrapper[4881]: I0121 10:57:03.791207 4881 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:57:03 crc kubenswrapper[4881]: I0121 10:57:03.791481 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:57:03 crc kubenswrapper[4881]: I0121 10:57:03.793467 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:03 crc kubenswrapper[4881]: I0121 10:57:03.793525 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:03 crc kubenswrapper[4881]: I0121 10:57:03.793540 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:04 crc kubenswrapper[4881]: I0121 10:57:04.095686 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 21:58:02.881381328 +0000 UTC Jan 21 10:57:04 crc kubenswrapper[4881]: I0121 10:57:04.799521 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:57:04 crc kubenswrapper[4881]: I0121 10:57:04.799759 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:57:04 crc kubenswrapper[4881]: I0121 10:57:04.801551 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:04 crc kubenswrapper[4881]: I0121 10:57:04.801640 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:04 crc kubenswrapper[4881]: I0121 10:57:04.801667 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:05 crc kubenswrapper[4881]: I0121 10:57:05.096603 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 02:38:22.028402051 +0000 UTC Jan 21 10:57:06 crc kubenswrapper[4881]: I0121 10:57:06.097329 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 01:24:43.271457869 +0000 UTC Jan 21 10:57:06 crc kubenswrapper[4881]: I0121 10:57:06.502833 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:57:06 crc kubenswrapper[4881]: I0121 10:57:06.503020 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:57:06 crc kubenswrapper[4881]: I0121 10:57:06.504830 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:06 crc kubenswrapper[4881]: I0121 10:57:06.504895 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:06 crc kubenswrapper[4881]: I0121 10:57:06.504915 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:07 crc kubenswrapper[4881]: I0121 10:57:07.098571 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 14:43:57.414215783 +0000 UTC Jan 21 10:57:07 crc 
kubenswrapper[4881]: I0121 10:57:07.799755 4881 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 10:57:07 crc kubenswrapper[4881]: I0121 10:57:07.800309 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 10:57:08 crc kubenswrapper[4881]: I0121 10:57:08.099444 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 16:07:11.619693162 +0000 UTC Jan 21 10:57:09 crc kubenswrapper[4881]: I0121 10:57:09.081047 4881 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 21 10:57:09 crc kubenswrapper[4881]: I0121 10:57:09.100668 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 11:02:31.354629611 +0000 UTC Jan 21 10:57:09 crc kubenswrapper[4881]: E0121 10:57:09.298030 4881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Jan 21 10:57:09 crc kubenswrapper[4881]: E0121 10:57:09.320767 4881 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 21 10:57:09 crc kubenswrapper[4881]: I0121 10:57:09.577819 4881 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 21 10:57:09 crc kubenswrapper[4881]: I0121 10:57:09.577938 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 21 10:57:09 crc kubenswrapper[4881]: I0121 10:57:09.583482 4881 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 21 10:57:09 crc kubenswrapper[4881]: I0121 10:57:09.583562 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 21 10:57:09 crc kubenswrapper[4881]: I0121 10:57:09.692179 4881 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 21 10:57:09 crc kubenswrapper[4881]: [+]log ok Jan 21 10:57:09 crc kubenswrapper[4881]: [+]etcd ok Jan 21 10:57:09 crc kubenswrapper[4881]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 21 10:57:09 crc kubenswrapper[4881]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 21 10:57:09 crc kubenswrapper[4881]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 21 10:57:09 crc kubenswrapper[4881]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 21 10:57:09 crc kubenswrapper[4881]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 21 10:57:09 crc kubenswrapper[4881]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 21 10:57:09 crc kubenswrapper[4881]: [+]poststarthook/generic-apiserver-start-informers ok Jan 21 10:57:09 crc kubenswrapper[4881]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 21 10:57:09 crc kubenswrapper[4881]: [+]poststarthook/priority-and-fairness-filter ok Jan 21 10:57:09 crc kubenswrapper[4881]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 21 10:57:09 crc kubenswrapper[4881]: [+]poststarthook/start-apiextensions-informers ok Jan 21 10:57:09 crc kubenswrapper[4881]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld Jan 21 10:57:09 crc kubenswrapper[4881]: [-]poststarthook/crd-informer-synced failed: reason withheld Jan 21 10:57:09 crc kubenswrapper[4881]: [+]poststarthook/start-system-namespaces-controller ok Jan 21 10:57:09 crc kubenswrapper[4881]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 21 10:57:09 crc kubenswrapper[4881]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 21 10:57:09 crc kubenswrapper[4881]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 21 10:57:09 crc kubenswrapper[4881]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 21 10:57:09 crc kubenswrapper[4881]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 21 10:57:09 crc kubenswrapper[4881]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 21 10:57:09 crc kubenswrapper[4881]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 21 10:57:09 crc kubenswrapper[4881]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 21 10:57:09 crc kubenswrapper[4881]: [+]poststarthook/bootstrap-controller ok Jan 21 10:57:09 crc kubenswrapper[4881]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 21 10:57:09 crc kubenswrapper[4881]: [+]poststarthook/start-kube-aggregator-informers ok Jan 21 10:57:09 crc kubenswrapper[4881]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 21 10:57:09 crc kubenswrapper[4881]: 
[+]poststarthook/apiservice-status-remote-available-controller ok Jan 21 10:57:09 crc kubenswrapper[4881]: [-]poststarthook/apiservice-registration-controller failed: reason withheld Jan 21 10:57:09 crc kubenswrapper[4881]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 21 10:57:09 crc kubenswrapper[4881]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld Jan 21 10:57:09 crc kubenswrapper[4881]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 21 10:57:09 crc kubenswrapper[4881]: [+]autoregister-completion ok Jan 21 10:57:09 crc kubenswrapper[4881]: [+]poststarthook/apiservice-openapi-controller ok Jan 21 10:57:09 crc kubenswrapper[4881]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 21 10:57:09 crc kubenswrapper[4881]: livez check failed Jan 21 10:57:09 crc kubenswrapper[4881]: I0121 10:57:09.692283 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:57:10 crc kubenswrapper[4881]: I0121 10:57:10.101847 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 06:41:37.268414718 +0000 UTC Jan 21 10:57:11 crc kubenswrapper[4881]: I0121 10:57:11.102960 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 21:48:46.363813232 +0000 UTC Jan 21 10:57:12 crc kubenswrapper[4881]: I0121 10:57:12.103486 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 19:38:43.250525484 +0000 UTC Jan 21 10:57:12 crc kubenswrapper[4881]: I0121 10:57:12.433866 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 21 10:57:12 crc kubenswrapper[4881]: I0121 10:57:12.434153 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:57:12 crc kubenswrapper[4881]: I0121 10:57:12.435762 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:12 crc kubenswrapper[4881]: I0121 10:57:12.435830 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:12 crc kubenswrapper[4881]: I0121 10:57:12.435840 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:12 crc kubenswrapper[4881]: I0121 10:57:12.448545 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 21 10:57:12 crc kubenswrapper[4881]: I0121 10:57:12.601740 4881 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:57:12 crc kubenswrapper[4881]: I0121 10:57:12.610347 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:12 crc kubenswrapper[4881]: I0121 10:57:12.610413 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:12 crc kubenswrapper[4881]: I0121 10:57:12.610426 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:13 
crc kubenswrapper[4881]: I0121 10:57:13.103715 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 22:49:23.182596498 +0000 UTC Jan 21 10:57:13 crc kubenswrapper[4881]: E0121 10:57:13.672419 4881 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.104242 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 13:21:45.233186623 +0000 UTC Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.569921 4881 trace.go:236] Trace[1415831381]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 10:57:03.958) (total time: 10610ms): Jan 21 10:57:14 crc kubenswrapper[4881]: Trace[1415831381]: ---"Objects listed" error: 10610ms (10:57:14.569) Jan 21 10:57:14 crc kubenswrapper[4881]: Trace[1415831381]: [10.610859008s] [10.610859008s] END Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.569965 4881 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.570569 4881 trace.go:236] Trace[270957976]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 10:57:00.226) (total time: 14343ms): Jan 21 10:57:14 crc kubenswrapper[4881]: Trace[270957976]: ---"Objects listed" error: 14343ms (10:57:14.570) Jan 21 10:57:14 crc kubenswrapper[4881]: Trace[270957976]: [14.34349883s] [14.34349883s] END Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.570622 4881 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.570621 4881 trace.go:236] Trace[60241337]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 10:57:00.503) (total time: 14067ms): Jan 21 10:57:14 crc kubenswrapper[4881]: Trace[60241337]: ---"Objects listed" error: 14067ms (10:57:14.570) Jan 21 10:57:14 crc kubenswrapper[4881]: Trace[60241337]: [14.067103256s] [14.067103256s] END Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.570668 4881 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.572206 4881 trace.go:236] Trace[827334694]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 10:57:00.237) (total time: 14334ms): Jan 21 10:57:14 crc kubenswrapper[4881]: Trace[827334694]: ---"Objects listed" error: 14334ms (10:57:14.572) Jan 21 10:57:14 crc kubenswrapper[4881]: Trace[827334694]: [14.334322514s] [14.334322514s] END Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.572228 4881 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.575546 4881 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.580548 4881 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.581014 4881 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.582469 4881 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.582529 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.582549 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.582575 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.582588 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:14Z","lastTransitionTime":"2026-01-21T10:57:14Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Jan 21 10:57:14 crc kubenswrapper[4881]: E0121 10:57:14.609406 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:14 crc kubenswrapper[4881]: 
I0121 10:57:14.617841 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.617886 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.617902 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.617927 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.617941 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:14Z","lastTransitionTime":"2026-01-21T10:57:14Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"}
Jan 21 10:57:14 crc kubenswrapper[4881]: E0121 10:57:14.637074 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:14 crc kubenswrapper[4881]: 
I0121 10:57:14.646389 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.646889 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.646972 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.646956 4881 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:56862->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.647172 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:56862->192.168.126.11:17697: read: connection reset by peer"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.647089 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.647306 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:14Z","lastTransitionTime":"2026-01-21T10:57:14Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"}
Jan 21 10:57:14 crc kubenswrapper[4881]: E0121 10:57:14.658404 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:14 crc kubenswrapper[4881]: 
I0121 10:57:14.662970 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.663014 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.663025 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.663050 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.663061 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:14Z","lastTransitionTime":"2026-01-21T10:57:14Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"}
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.672993 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.673866 4881 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.673975 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 21 10:57:14 crc kubenswrapper[4881]: E0121 10:57:14.677391 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf8
6\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespa
ces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.678175 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.681586 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.681629 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.681643 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.681668 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.681680 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:14Z","lastTransitionTime":"2026-01-21T10:57:14Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Jan 21 10:57:14 crc kubenswrapper[4881]: E0121 10:57:14.695017 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4e
c8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"
names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection 
refused" Jan 21 10:57:14 crc kubenswrapper[4881]: E0121 10:57:14.695198 4881 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.697277 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.697333 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.697346 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.697374 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.697389 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:14Z","lastTransitionTime":"2026-01-21T10:57:14Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.800767 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.800844 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.800858 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.800886 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.800898 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:14Z","lastTransitionTime":"2026-01-21T10:57:14Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.807122 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.812477 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.904163 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.904239 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.904256 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.904302 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.904320 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:14Z","lastTransitionTime":"2026-01-21T10:57:14Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.007417 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.007482 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.007495 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.007524 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.007534 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:15Z","lastTransitionTime":"2026-01-21T10:57:15Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.105131 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 18:55:36.033715398 +0000 UTC Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.111255 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.111308 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.111319 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.111341 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.111354 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:15Z","lastTransitionTime":"2026-01-21T10:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.143285 4881 apiserver.go:52] "Watching apiserver" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.147294 4881 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.147914 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.148293 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.148460 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.148611 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.148703 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.148948 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.149015 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.149045 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.149045 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.149164 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.156569 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.156703 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.156702 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.156844 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.156875 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.156893 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.158838 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.161350 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.194035 4881 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205025 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 
10:57:15.205322 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205350 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205374 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205393 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205419 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205441 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205520 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205546 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205568 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205593 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 
10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205620 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205640 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205660 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205683 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205708 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205729 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205750 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205802 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205831 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205859 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 10:57:15 crc 
kubenswrapper[4881]: I0121 10:57:15.205886 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205912 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205937 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205960 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205983 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206004 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206028 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206051 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206072 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206092 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206113 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206135 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206155 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206176 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206203 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206226 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206249 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206270 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206294 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206317 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206337 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206456 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206483 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206512 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206572 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206597 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206618 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206639 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206662 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206702 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206729 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206754 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206777 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206821 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206848 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206870 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206896 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206919 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206926 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206947 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206971 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206994 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207018 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207039 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207062 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207084 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207106 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207128 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207150 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207172 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207197 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207224 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207249 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207269 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207297 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207323 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207344 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207366 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207386 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207406 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207428 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207449 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207472 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207496 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207538 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207564 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207606 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207630 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207654 4881 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207677 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207703 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207728 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207750 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207773 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213495 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213629 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213658 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213684 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 
10:57:15.213715 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213739 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213764 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213812 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213841 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213866 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213888 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213913 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213951 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213982 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:57:15 crc 
kubenswrapper[4881]: I0121 10:57:15.214005 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214029 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214055 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214082 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214104 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214127 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214149 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214169 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214193 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214214 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214237 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214257 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214278 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214298 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214323 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214346 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214370 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214394 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214415 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214436 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214461 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214482 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214504 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214526 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214548 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214598 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214624 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214645 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214667 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214689 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214721 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214745 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214767 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214807 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214830 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214853 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214876 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214899 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214921 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214943 4881 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214966 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214988 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215009 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215032 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215055 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215079 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215098 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215121 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215142 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215164 4881 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215198 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215220 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215240 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215264 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215287 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215308 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215332 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215355 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215378 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215399 4881 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215420 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215443 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215464 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215486 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215507 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215530 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215553 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215576 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215599 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215621 4881 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215643 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215666 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215687 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215711 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215733 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215752 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215773 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215820 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215844 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 10:57:15 crc kubenswrapper[4881]: 
I0121 10:57:15.215878 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215984 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.216020 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.216043 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.216066 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.216090 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.216118 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.216142 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207223 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.216127 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207436 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207819 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.208245 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.208496 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.208904 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.208938 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.209141 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.209201 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.209386 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.209387 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.209632 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.209687 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.209827 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.209907 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.210014 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.210933 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.211207 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.211284 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.211285 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.211376 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.211496 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.211558 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). 
InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.211593 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.211603 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.211746 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.211893 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.212060 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.212072 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.212105 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.212259 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.212283 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.212329 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.212420 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.212416 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.212625 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.212655 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.212686 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.212774 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.212848 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.212890 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213035 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213059 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213100 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213130 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213249 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.217032 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213330 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.216136 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.216494 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.216733 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.217052 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.217454 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.217758 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.220717 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.221008 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.221144 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.221261 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.221345 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.221878 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.222132 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.222180 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.222266 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:57:15.722238804 +0000 UTC m=+22.982195473 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.222790 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.222925 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.224088 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.225431 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.232242 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.233963 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.234819 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.242085 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.248281 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.251779 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.259368 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.260266 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.260487 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.260495 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.260554 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.260566 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.260582 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.260615 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:15Z","lastTransitionTime":"2026-01-21T10:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.260969 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.261250 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.261492 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.262554 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.262795 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.262841 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.262943 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.262979 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.263076 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.263248 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.263359 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.263687 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.264175 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.264223 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.264272 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.264356 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.264629 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.264656 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). 
InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.264439 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.264670 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.264693 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.264879 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.265335 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.265370 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.265487 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.265496 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.265898 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.266095 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.266127 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.266118 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.266328 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.266594 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.266626 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.266714 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.267235 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.267318 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.267389 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.267557 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.267738 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.268020 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.268096 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.268358 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.268382 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.268510 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.268549 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.268629 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.268720 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.269259 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.269322 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.269530 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.269588 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.269651 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.269756 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.269866 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.269951 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.270046 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.270085 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.270157 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.270252 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.270384 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.270434 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.270524 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.270608 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.270797 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.270817 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.270995 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.271211 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.271250 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.271253 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.271275 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.271472 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.271570 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.271595 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.271596 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.271712 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.271983 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.272089 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.272210 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). 
InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.272292 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.272668 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.272646 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.273018 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.273108 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.273348 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.216168 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.273454 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.273479 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.273499 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.273533 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.273531 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.273562 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.272889 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.273743 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.273885 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.273932 4881 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.273980 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:15.773965832 +0000 UTC m=+23.033922291 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.274032 4881 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.274062 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:15.774055604 +0000 UTC m=+23.034012183 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.273702 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274094 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274114 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274132 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274152 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274169 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274187 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274205 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274226 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274242 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274264 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274357 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274380 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: 
\"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274394 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274403 4881 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274422 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274442 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274455 4881 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274468 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274483 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274499 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274511 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274524 4881 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274535 4881 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274548 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274560 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: 
\"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274572 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274584 4881 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283322 4881 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283774 4881 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283816 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283828 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283840 4881 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283851 4881 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283864 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283874 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283884 4881 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283899 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283908 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283919 4881 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283929 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283939 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283949 4881 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283961 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283972 4881 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283981 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283989 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283998 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284007 4881 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284016 4881 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284026 4881 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284034 4881 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284045 4881 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284054 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284063 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284072 4881 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284082 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284091 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284101 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284111 4881 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284120 4881 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284131 4881 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284142 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284151 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284160 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: 
\"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284170 4881 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284179 4881 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284188 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284198 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284209 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284217 4881 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284227 4881 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284239 4881 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284249 4881 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284260 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284271 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284280 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284289 4881 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284299 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284309 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284319 4881 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284335 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284345 4881 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284375 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284386 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284396 4881 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284406 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284416 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284426 4881 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284440 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284450 4881 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284459 4881 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284470 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284480 4881 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284490 4881 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284498 4881 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284507 4881 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284517 4881 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284526 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284537 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284548 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284556 4881 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284564 4881 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284573 4881 reconciler_common.go:293] "Volume detached for volume \"certs\" 
(UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284582 4881 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284591 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284601 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284611 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284620 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284629 4881 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284640 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284650 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284659 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284668 4881 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284677 4881 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284686 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284695 4881 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284704 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284714 4881 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284723 4881 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284733 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284743 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284751 4881 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284762 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284772 4881 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284785 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284806 4881 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284816 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284825 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284834 4881 reconciler_common.go:293] 
"Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284844 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284853 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284861 4881 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284870 4881 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284879 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284888 4881 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284898 4881 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284906 4881 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284917 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284927 4881 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284936 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284945 4881 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284954 4881 reconciler_common.go:293] 
"Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284963 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284972 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284981 4881 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284990 4881 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284999 4881 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285008 4881 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285017 4881 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285026 4881 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285036 4881 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285046 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285056 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285065 4881 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285075 4881 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285084 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285093 4881 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285103 4881 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285112 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285122 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285130 4881 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285139 4881 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285148 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285156 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285165 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285174 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285182 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285191 4881 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285200 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285209 4881 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285218 4881 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285228 4881 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285238 4881 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285250 4881 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285258 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285269 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285278 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285287 4881 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274278 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274525 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274604 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.273696 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274661 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.275342 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.275976 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.277177 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.277472 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.278848 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.282225 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.282492 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.286986 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.287217 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.287434 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.287877 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.293137 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.293231 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.294923 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.295009 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.296053 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.296779 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.297161 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.298134 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.301256 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.302494 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.302810 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.304354 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.304493 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.305881 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.309651 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.313960 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.314628 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.315952 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.316642 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.317832 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.318507 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.319598 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.320930 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.321690 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.322839 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.324225 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.326327 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.327931 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.328518 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:57:15 
crc kubenswrapper[4881]: E0121 10:57:15.328572 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.328591 4881 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.328685 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:15.828653874 +0000 UTC m=+23.088610343 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.329471 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.331387 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.332119 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.333470 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.334097 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.334855 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.335976 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.336555 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.337412 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.338281 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.338776 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.339988 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.340876 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.342266 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.342486 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.342502 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.342513 4881 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.342590 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:15.842569035 +0000 UTC m=+23.102525504 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.346480 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.346472 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.347262 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.348062 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.348509 4881 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.348610 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.350892 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.355814 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.356280 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.357683 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.358352 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.359980 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.360600 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.361577 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.362010 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.362564 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.363045 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.364029 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.364631 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.365440 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.365988 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.366834 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.367555 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.368583 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.369261 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 21 10:57:15 crc 
kubenswrapper[4881]: I0121 10:57:15.370287 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.370929 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.371494 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.372344 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.377312 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.378692 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.378931 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.379052 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.379183 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.379295 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:15Z","lastTransitionTime":"2026-01-21T10:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.386558 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.386766 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.386913 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.386985 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.387059 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.387170 4881 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.387253 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.387325 4881 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.387386 4881 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.387450 4881 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.387517 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.387589 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.387655 4881 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.387718 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.387801 4881 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.387877 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.387942 4881 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.388011 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.388074 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.388136 4881 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.388209 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.388279 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.388347 4881 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.388409 4881 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.388468 4881 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.388528 4881 
reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.388595 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.388653 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.388732 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.386824 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.386761 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.464497 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.482853 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.482894 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.482904 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.482920 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.482931 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:15Z","lastTransitionTime":"2026-01-21T10:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.485079 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.491541 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.505321 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.505301 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: W0121 10:57:15.516486 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-d6436754ed5ebf6ee90a2869240521a066631a25d2a4b654baf6933f752a4400 WatchSource:0}: Error finding container d6436754ed5ebf6ee90a2869240521a066631a25d2a4b654baf6933f752a4400: Status 404 returned error can't find the container with id d6436754ed5ebf6ee90a2869240521a066631a25d2a4b654baf6933f752a4400 Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.522627 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: W0121 10:57:15.525526 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-80bcd2d67fc2ece3f2ca34c1d85b071ec2520641e88b7bbda14251a9114c6f17 WatchSource:0}: Error finding container 80bcd2d67fc2ece3f2ca34c1d85b071ec2520641e88b7bbda14251a9114c6f17: Status 404 returned error can't find the container with id 80bcd2d67fc2ece3f2ca34c1d85b071ec2520641e88b7bbda14251a9114c6f17 Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.538160 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.560763 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.576029 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.585765 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.585838 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.585855 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.585875 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.585889 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:15Z","lastTransitionTime":"2026-01-21T10:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.591483 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.606515 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.614342 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d6436754ed5ebf6ee90a2869240521a066631a25d2a4b654baf6933f752a4400"} Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.617369 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.621584 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"96404a7900d2841e95a8a7fcf083d01866feb5906844e55c1617d9f30bafd933"} Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.625004 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.625567 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.632993 4881 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570" exitCode=255 Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.633117 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570"} Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.633191 4881 scope.go:117] "RemoveContainer" containerID="d84c900436f03473de2cb7e61d5cacb76cae260a4b22be5debafff2a5cb4d98f" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.634241 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://d84c900436f03473de2cb7e61d5cacb76cae260a4b22be5debafff2a5cb4d98f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:56:58Z\\\",\\\"message\\\":\\\"W0121 10:56:57.509137 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0121 10:56:57.509724 1 crypto.go:601] Generating new CA for check-endpoints-signer@1768993017 cert, and key in /tmp/serving-cert-3442157096/serving-signer.crt, /tmp/serving-cert-3442157096/serving-signer.key\\\\nI0121 10:56:57.842593 1 observer_polling.go:159] Starting file observer\\\\nW0121 10:56:57.865464 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0121 10:56:57.865720 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:56:57.868508 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3442157096/tls.crt::/tmp/serving-cert-3442157096/tls.key\\\\\\\"\\\\nF0121 10:56:58.276304 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.642425 4881 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.642839 4881 scope.go:117] "RemoveContainer" containerID="676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570" Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.643136 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.648362 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"80bcd2d67fc2ece3f2ca34c1d85b071ec2520641e88b7bbda14251a9114c6f17"} Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.655635 4881 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.660661 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84c900436f03473de2cb7e61d5cacb76cae260a4b22be5debafff2a5cb4d98f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:56:58Z\\\",\\\"message\\\":\\\"W0121 10:56:57.509137 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0121 
10:56:57.509724 1 crypto.go:601] Generating new CA for check-endpoints-signer@1768993017 cert, and key in /tmp/serving-cert-3442157096/serving-signer.crt, /tmp/serving-cert-3442157096/serving-signer.key\\\\nI0121 10:56:57.842593 1 observer_polling.go:159] Starting file observer\\\\nW0121 10:56:57.865464 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0121 10:56:57.865720 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:56:57.868508 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3442157096/tls.crt::/tmp/serving-cert-3442157096/tls.key\\\\\\\"\\\\nF0121 10:56:58.276304 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.680044 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.688876 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.688914 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.688926 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.688947 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.688959 4881 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:15Z","lastTransitionTime":"2026-01-21T10:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.692234 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.706386 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.721517 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.737838 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.753334 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.767290 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.791798 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.791842 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.791853 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.791866 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.791875 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:15Z","lastTransitionTime":"2026-01-21T10:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.793219 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.793280 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.793321 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.793442 4881 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.793492 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:16.793477034 +0000 UTC m=+24.053433503 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.793716 4881 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.793847 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:57:16.793772642 +0000 UTC m=+24.053729111 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.793907 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:16.793890705 +0000 UTC m=+24.053847454 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.893998 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.894044 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.894201 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.894212 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.894267 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.894283 4881 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.894234 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.894348 4881 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: 
[object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.894362 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:16.894337108 +0000 UTC m=+24.154293577 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.894406 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:16.89438711 +0000 UTC m=+24.154343579 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.895275 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.895342 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.895360 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.895385 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.895402 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:15Z","lastTransitionTime":"2026-01-21T10:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.998042 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.998111 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.998126 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.998150 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.998164 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:15Z","lastTransitionTime":"2026-01-21T10:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.038950 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-v4wxp"] Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.039644 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-fb4fr"] Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.039931 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.039946 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.041685 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bx64f"] Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.042299 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.042399 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.042851 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.042957 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.043174 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.043377 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.043641 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.043744 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.043880 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.043937 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.044092 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.046704 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 10:57:16 crc kubenswrapper[4881]: W0121 10:57:16.046896 4881 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: configmaps "env-overrides" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.046939 4881 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"env-overrides\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.047488 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.047889 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-fs42r"] Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.048310 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: W0121 10:57:16.048414 4881 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: secrets "ovn-node-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.048445 4881 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-node-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 10:57:16 crc kubenswrapper[4881]: W0121 10:57:16.048733 4881 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: secrets "ovn-kubernetes-node-dockercfg-pwtwl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 21 10:57:16 crc kubenswrapper[4881]: W0121 10:57:16.048764 4881 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": failed to list *v1.ConfigMap: configmaps "ovnkube-script-lib" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.048769 4881 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-node-dockercfg-pwtwl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.048819 4881 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-script-lib\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 10:57:16 crc kubenswrapper[4881]: W0121 10:57:16.048958 4881 reflector.go:561] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.048988 4881 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between 
node 'crc' and this object" logger="UnhandledError" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.049478 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.049824 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.051382 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-8sptw"] Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.051766 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-8sptw" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.053455 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.053666 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.053836 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.056271 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84c900436f03473de2cb7e61d5cacb76cae260a4b22be5debafff2a5cb4d98f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:56:58Z\\\",\\\"message\\\":\\\"W0121 10:56:57.509137 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0121 
10:56:57.509724 1 crypto.go:601] Generating new CA for check-endpoints-signer@1768993017 cert, and key in /tmp/serving-cert-3442157096/serving-signer.crt, /tmp/serving-cert-3442157096/serving-signer.key\\\\nI0121 10:56:57.842593 1 observer_polling.go:159] Starting file observer\\\\nW0121 10:56:57.865464 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0121 10:56:57.865720 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:56:57.868508 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3442157096/tls.crt::/tmp/serving-cert-3442157096/tls.key\\\\\\\"\\\\nF0121 10:56:58.276304 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.068679 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.078408 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.091149 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.101396 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.101437 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.101449 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.101466 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.101479 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:16Z","lastTransitionTime":"2026-01-21T10:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.104147 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.105392 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 09:57:43.532369125 +0000 UTC Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.121107 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with 
incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd36
7c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.135511 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.167527 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.196669 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-run-netns\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.196705 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/09da9e14-f6d5-4346-a4a0-c17711e3b603-multus-daemon-config\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.196723 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6nwq\" (UniqueName: \"kubernetes.io/projected/c14980d7-1b3b-463b-8f57-f1e1afbd258c-kube-api-access-t6nwq\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.196742 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-system-cni-dir\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.196761 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c14980d7-1b3b-463b-8f57-f1e1afbd258c-cnibin\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.196803 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-ovn\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.196826 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.196918 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-cni-bin\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197011 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovn-node-metrics-cert\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197044 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-var-lib-cni-bin\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197069 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/09da9e14-f6d5-4346-a4a0-c17711e3b603-cni-binary-copy\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197124 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-systemd\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197144 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-etc-openvswitch\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197163 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz6fb\" (UniqueName: \"kubernetes.io/projected/e8bb6d97-b3b8-4e31-b704-8e565385ab26-kube-api-access-kz6fb\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197187 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c14980d7-1b3b-463b-8f57-f1e1afbd258c-system-cni-dir\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " 
pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197208 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3687b313-1df2-4274-80db-8c758b51bf2d-mcd-auth-proxy-config\") pod \"machine-config-daemon-fb4fr\" (UID: \"3687b313-1df2-4274-80db-8c758b51bf2d\") " pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197227 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-env-overrides\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197253 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-multus-conf-dir\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197271 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f19f480e-331f-42f5-a3b6-fd0c6847b157-hosts-file\") pod \"node-resolver-8sptw\" (UID: \"f19f480e-331f-42f5-a3b6-fd0c6847b157\") " pod="openshift-dns/node-resolver-8sptw" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197291 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjr7g\" (UniqueName: \"kubernetes.io/projected/f19f480e-331f-42f5-a3b6-fd0c6847b157-kube-api-access-hjr7g\") pod \"node-resolver-8sptw\" (UID: \"f19f480e-331f-42f5-a3b6-fd0c6847b157\") " pod="openshift-dns/node-resolver-8sptw" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197318 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-var-lib-kubelet\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197380 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hml99\" (UniqueName: \"kubernetes.io/projected/3687b313-1df2-4274-80db-8c758b51bf2d-kube-api-access-hml99\") pod \"machine-config-daemon-fb4fr\" (UID: \"3687b313-1df2-4274-80db-8c758b51bf2d\") " pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197402 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovnkube-script-lib\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197422 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-hostroot\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197441 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-multus-cni-dir\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197470 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-run-k8s-cni-cncf-io\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197501 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-kubelet\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197527 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-log-socket\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197546 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovnkube-config\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197564 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-etc-kubernetes\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197598 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c14980d7-1b3b-463b-8f57-f1e1afbd258c-cni-binary-copy\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197640 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-multus-socket-dir-parent\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197655 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-var-lib-cni-multus\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197691 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-slash\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197709 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-run-netns\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197731 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-var-lib-openvswitch\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197762 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-run-ovn-kubernetes\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197802 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c14980d7-1b3b-463b-8f57-f1e1afbd258c-os-release\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197824 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3687b313-1df2-4274-80db-8c758b51bf2d-rootfs\") pod \"machine-config-daemon-fb4fr\" (UID: \"3687b313-1df2-4274-80db-8c758b51bf2d\") " pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197874 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c14980d7-1b3b-463b-8f57-f1e1afbd258c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197919 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-openvswitch\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 
21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197939 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c14980d7-1b3b-463b-8f57-f1e1afbd258c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197966 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-systemd-units\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197985 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-cni-netd\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.198004 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-cnibin\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.198036 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-os-release\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.198061 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-node-log\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.198078 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-run-multus-certs\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.198096 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kt6w\" (UniqueName: \"kubernetes.io/projected/09da9e14-f6d5-4346-a4a0-c17711e3b603-kube-api-access-7kt6w\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.198113 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3687b313-1df2-4274-80db-8c758b51bf2d-proxy-tls\") pod \"machine-config-daemon-fb4fr\" (UID: \"3687b313-1df2-4274-80db-8c758b51bf2d\") " 
pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.204135 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.204179 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.204189 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.204204 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.204216 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:16Z","lastTransitionTime":"2026-01-21T10:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.221806 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.250016 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.300532 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovnkube-script-lib\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.300919 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-hostroot\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301050 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-hostroot\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301064 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovnkube-config\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301192 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-multus-cni-dir\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301236 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-run-k8s-cni-cncf-io\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301275 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-kubelet\") pod \"ovnkube-node-bx64f\" (UID: 
\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301302 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-log-socket\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301328 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-etc-kubernetes\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301357 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c14980d7-1b3b-463b-8f57-f1e1afbd258c-cni-binary-copy\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301385 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-run-ovn-kubernetes\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301415 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-multus-socket-dir-parent\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301442 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-var-lib-cni-multus\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301492 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-slash\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301518 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-run-netns\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301545 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-var-lib-openvswitch\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301575 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c14980d7-1b3b-463b-8f57-f1e1afbd258c-os-release\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301604 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3687b313-1df2-4274-80db-8c758b51bf2d-rootfs\") pod \"machine-config-daemon-fb4fr\" (UID: \"3687b313-1df2-4274-80db-8c758b51bf2d\") " pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301636 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c14980d7-1b3b-463b-8f57-f1e1afbd258c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301663 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c14980d7-1b3b-463b-8f57-f1e1afbd258c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301712 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-openvswitch\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301752 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-systemd-units\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301821 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-cni-netd\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301849 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-cnibin\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301878 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-os-release\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc 
kubenswrapper[4881]: I0121 10:57:16.301902 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3687b313-1df2-4274-80db-8c758b51bf2d-proxy-tls\") pod \"machine-config-daemon-fb4fr\" (UID: \"3687b313-1df2-4274-80db-8c758b51bf2d\") " pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301932 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-node-log\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301959 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-run-multus-certs\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301990 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kt6w\" (UniqueName: \"kubernetes.io/projected/09da9e14-f6d5-4346-a4a0-c17711e3b603-kube-api-access-7kt6w\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302020 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-run-netns\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302056 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/09da9e14-f6d5-4346-a4a0-c17711e3b603-multus-daemon-config\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302082 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6nwq\" (UniqueName: \"kubernetes.io/projected/c14980d7-1b3b-463b-8f57-f1e1afbd258c-kube-api-access-t6nwq\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302120 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302149 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-system-cni-dir\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302207 4881 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c14980d7-1b3b-463b-8f57-f1e1afbd258c-cnibin\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302244 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-ovn\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302273 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-cni-bin\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302298 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovn-node-metrics-cert\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302323 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-var-lib-cni-bin\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302348 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/09da9e14-f6d5-4346-a4a0-c17711e3b603-cni-binary-copy\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302390 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-systemd\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302415 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-etc-openvswitch\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302443 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz6fb\" (UniqueName: \"kubernetes.io/projected/e8bb6d97-b3b8-4e31-b704-8e565385ab26-kube-api-access-kz6fb\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302474 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/c14980d7-1b3b-463b-8f57-f1e1afbd258c-system-cni-dir\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302502 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3687b313-1df2-4274-80db-8c758b51bf2d-mcd-auth-proxy-config\") pod \"machine-config-daemon-fb4fr\" (UID: \"3687b313-1df2-4274-80db-8c758b51bf2d\") " pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302564 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-env-overrides\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302592 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-multus-conf-dir\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302623 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f19f480e-331f-42f5-a3b6-fd0c6847b157-hosts-file\") pod \"node-resolver-8sptw\" (UID: \"f19f480e-331f-42f5-a3b6-fd0c6847b157\") " pod="openshift-dns/node-resolver-8sptw" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302649 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjr7g\" (UniqueName: \"kubernetes.io/projected/f19f480e-331f-42f5-a3b6-fd0c6847b157-kube-api-access-hjr7g\") pod \"node-resolver-8sptw\" (UID: \"f19f480e-331f-42f5-a3b6-fd0c6847b157\") " pod="openshift-dns/node-resolver-8sptw" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302672 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hml99\" (UniqueName: \"kubernetes.io/projected/3687b313-1df2-4274-80db-8c758b51bf2d-kube-api-access-hml99\") pod \"machine-config-daemon-fb4fr\" (UID: \"3687b313-1df2-4274-80db-8c758b51bf2d\") " pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302703 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-var-lib-kubelet\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302802 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-var-lib-kubelet\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302879 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-multus-cni-dir\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302923 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-run-k8s-cni-cncf-io\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302960 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-kubelet\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303004 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-log-socket\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303037 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-etc-kubernetes\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303237 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-run-netns\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303450 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-run-multus-certs\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303449 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-run-netns\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303502 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-run-ovn-kubernetes\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303510 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-node-log\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc 
kubenswrapper[4881]: I0121 10:57:16.303553 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-var-lib-cni-multus\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303563 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-multus-socket-dir-parent\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303591 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-systemd-units\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303616 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-slash\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303627 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-openvswitch\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303636 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c14980d7-1b3b-463b-8f57-f1e1afbd258c-cnibin\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303681 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303738 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-system-cni-dir\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303773 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-cni-bin\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303833 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-ovn\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303863 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c14980d7-1b3b-463b-8f57-f1e1afbd258c-cni-binary-copy\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303893 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c14980d7-1b3b-463b-8f57-f1e1afbd258c-os-release\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303915 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-systemd\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303931 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-var-lib-openvswitch\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.304123 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-cnibin\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.304151 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-cni-netd\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.304184 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-var-lib-cni-bin\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.304210 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/09da9e14-f6d5-4346-a4a0-c17711e3b603-cni-binary-copy\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.304232 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-multus-conf-dir\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" 
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.304264 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3687b313-1df2-4274-80db-8c758b51bf2d-rootfs\") pod \"machine-config-daemon-fb4fr\" (UID: \"3687b313-1df2-4274-80db-8c758b51bf2d\") " pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.304263 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c14980d7-1b3b-463b-8f57-f1e1afbd258c-system-cni-dir\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.304328 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f19f480e-331f-42f5-a3b6-fd0c6847b157-hosts-file\") pod \"node-resolver-8sptw\" (UID: \"f19f480e-331f-42f5-a3b6-fd0c6847b157\") " pod="openshift-dns/node-resolver-8sptw" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.304333 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-etc-openvswitch\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.304557 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c14980d7-1b3b-463b-8f57-f1e1afbd258c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.304902 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/09da9e14-f6d5-4346-a4a0-c17711e3b603-multus-daemon-config\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.304983 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3687b313-1df2-4274-80db-8c758b51bf2d-mcd-auth-proxy-config\") pod \"machine-config-daemon-fb4fr\" (UID: \"3687b313-1df2-4274-80db-8c758b51bf2d\") " pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.305071 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c14980d7-1b3b-463b-8f57-f1e1afbd258c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.305137 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-os-release\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.305697 4881 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovnkube-config\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.306410 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.306441 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.306470 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.306488 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.306500 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:16Z","lastTransitionTime":"2026-01-21T10:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.308978 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.309511 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3687b313-1df2-4274-80db-8c758b51bf2d-proxy-tls\") pod \"machine-config-daemon-fb4fr\" (UID: \"3687b313-1df2-4274-80db-8c758b51bf2d\") " pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.309707 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.309720 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.309859 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.310008 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.327688 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6nwq\" (UniqueName: \"kubernetes.io/projected/c14980d7-1b3b-463b-8f57-f1e1afbd258c-kube-api-access-t6nwq\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.328291 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hml99\" (UniqueName: \"kubernetes.io/projected/3687b313-1df2-4274-80db-8c758b51bf2d-kube-api-access-hml99\") pod \"machine-config-daemon-fb4fr\" (UID: \"3687b313-1df2-4274-80db-8c758b51bf2d\") " pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.346968 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kt6w\" (UniqueName: \"kubernetes.io/projected/09da9e14-f6d5-4346-a4a0-c17711e3b603-kube-api-access-7kt6w\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.347373 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjr7g\" (UniqueName: \"kubernetes.io/projected/f19f480e-331f-42f5-a3b6-fd0c6847b157-kube-api-access-hjr7g\") pod \"node-resolver-8sptw\" (UID: \"f19f480e-331f-42f5-a3b6-fd0c6847b157\") " pod="openshift-dns/node-resolver-8sptw" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.362674 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.374228 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 10:57:16 crc kubenswrapper[4881]: W0121 10:57:16.388679 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc14980d7_1b3b_463b_8f57_f1e1afbd258c.slice/crio-8ee7f772c0c098089754b613d4fa12c49ea696eef205b96618c4a6e2b9db4ec5 WatchSource:0}: Error finding container 8ee7f772c0c098089754b613d4fa12c49ea696eef205b96618c4a6e2b9db4ec5: Status 404 returned error can't find the container with id 8ee7f772c0c098089754b613d4fa12c49ea696eef205b96618c4a6e2b9db4ec5 Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.399199 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.406398 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-8sptw" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.411364 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.411418 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.411431 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.411446 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.411455 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:16Z","lastTransitionTime":"2026-01-21T10:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.415013 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:16 crc kubenswrapper[4881]: W0121 10:57:16.421483 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09da9e14_f6d5_4346_a4a0_c17711e3b603.slice/crio-d27b8e14d97b15bcb1aef61298f9b7ccb557ac67c51e1a710a96f9ba32b14f84 WatchSource:0}: Error finding container d27b8e14d97b15bcb1aef61298f9b7ccb557ac67c51e1a710a96f9ba32b14f84: Status 404 returned error can't find 
the container with id d27b8e14d97b15bcb1aef61298f9b7ccb557ac67c51e1a710a96f9ba32b14f84 Jan 21 10:57:16 crc kubenswrapper[4881]: W0121 10:57:16.431549 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf19f480e_331f_42f5_a3b6_fd0c6847b157.slice/crio-02885e2b14dd2366051a366641ba0be9c0f8c8bd449f9e7f0dcd7029ec83464d WatchSource:0}: Error finding container 02885e2b14dd2366051a366641ba0be9c0f8c8bd449f9e7f0dcd7029ec83464d: Status 404 returned error can't find the container with id 02885e2b14dd2366051a366641ba0be9c0f8c8bd449f9e7f0dcd7029ec83464d Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.522205 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.522241 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.522249 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.522264 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.522275 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:16Z","lastTransitionTime":"2026-01-21T10:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.677152 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.677187 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.677196 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.677213 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.677223 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:16Z","lastTransitionTime":"2026-01-21T10:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.678542 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8sptw" event={"ID":"f19f480e-331f-42f5-a3b6-fd0c6847b157","Type":"ContainerStarted","Data":"02885e2b14dd2366051a366641ba0be9c0f8c8bd449f9e7f0dcd7029ec83464d"} Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.679727 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fs42r" event={"ID":"09da9e14-f6d5-4346-a4a0-c17711e3b603","Type":"ContainerStarted","Data":"d27b8e14d97b15bcb1aef61298f9b7ccb557ac67c51e1a710a96f9ba32b14f84"} Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.680328 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" event={"ID":"c14980d7-1b3b-463b-8f57-f1e1afbd258c","Type":"ContainerStarted","Data":"8ee7f772c0c098089754b613d4fa12c49ea696eef205b96618c4a6e2b9db4ec5"} Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.681891 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.684377 4881 scope.go:117] "RemoveContainer" containerID="676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570" Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.684556 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.708025 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.711050 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92"} Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.711120 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033"} Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.723640 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0"} Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.725338 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"30960a323fe252c1b69c590045a527a2b99ebff962e226251bc9c286c0dae8cf"} Jan 21 10:57:16 crc kubenswrapper[4881]: 
I0121 10:57:16.784819 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.785061 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\
\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.785318 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.785500 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.785521 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.785535 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:16Z","lastTransitionTime":"2026-01-21T10:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.814206 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.814419 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:57:18.814394624 +0000 UTC m=+26.074351093 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.814587 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.814641 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.815164 4881 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.815184 4881 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.815231 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:18.815219994 +0000 UTC m=+26.075176643 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.815294 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:18.815284205 +0000 UTC m=+26.075240674 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.819493 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.858842 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.894625 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator
@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84c900436f03473de2cb7e61d5cacb76cae260a4b22be5debafff2a5cb4d98f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:56:58Z\\\",\\\"message\\\":\\\"W0121 10:56:57.509137 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0121 10:56:57.509724 1 crypto.go:601] Generating new CA for check-endpoints-signer@1768993017 cert, and key in /tmp/serving-cert-3442157096/serving-signer.crt, /tmp/serving-cert-3442157096/serving-signer.key\\\\nI0121 10:56:57.842593 1 observer_polling.go:159] Starting file observer\\\\nW0121 10:56:57.865464 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0121 10:56:57.865720 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:56:57.868508 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3442157096/tls.crt::/tmp/serving-cert-3442157096/tls.key\\\\\\\"\\\\nF0121 10:56:58.276304 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for 
RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"star
tTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.896707 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.896858 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.896934 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.897036 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.897115 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:16Z","lastTransitionTime":"2026-01-21T10:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.914593 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.915642 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.915869 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.915998 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.916052 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.916077 4881 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.916205 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:18.9161744 +0000 UTC m=+26.176130869 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.917490 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.917506 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.917515 4881 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.917542 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:18.917534203 +0000 UTC m=+26.177490672 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.939090 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.955942 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.978237 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.997483 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.999379 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.999419 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.999430 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.999447 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.999457 4881 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:16Z","lastTransitionTime":"2026-01-21T10:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.022280 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.035840 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.049429 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.063010 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.076851 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.089187 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.099671 4881 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.101348 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:17 crc 
kubenswrapper[4881]: I0121 10:57:17.101380 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.101396 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.101419 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.101433 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:17Z","lastTransitionTime":"2026-01-21T10:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.105768 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 18:43:24.897549068 +0000 UTC Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.118970 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.133794 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.146562 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.172218 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.186571 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.199291 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.203256 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.203289 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.203299 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.203312 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.203322 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:17Z","lastTransitionTime":"2026-01-21T10:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.270026 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.270490 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.271837 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovnkube-script-lib\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.275210 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-env-overrides\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:17 crc kubenswrapper[4881]: E0121 10:57:17.304579 4881 secret.go:188] Couldn't get secret openshift-ovn-kubernetes/ovn-node-metrics-cert: failed to sync secret cache: timed out waiting for the condition Jan 21 10:57:17 crc kubenswrapper[4881]: E0121 10:57:17.304696 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovn-node-metrics-cert podName:e8bb6d97-b3b8-4e31-b704-8e565385ab26 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:17.804669009 +0000 UTC m=+25.064625488 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovn-node-metrics-cert" (UniqueName: "kubernetes.io/secret/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovn-node-metrics-cert") pod "ovnkube-node-bx64f" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26") : failed to sync secret cache: timed out waiting for the condition Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.306022 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.306077 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.306088 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.306109 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.306121 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:17Z","lastTransitionTime":"2026-01-21T10:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.309774 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:17 crc kubenswrapper[4881]: E0121 10:57:17.309932 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.313445 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 21 10:57:17 crc kubenswrapper[4881]: E0121 10:57:17.333450 4881 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 21 10:57:17 crc kubenswrapper[4881]: E0121 10:57:17.333553 4881 projected.go:194] Error preparing data for projected volume kube-api-access-kz6fb for pod openshift-ovn-kubernetes/ovnkube-node-bx64f: failed to sync configmap cache: timed out waiting for the condition Jan 21 10:57:17 crc kubenswrapper[4881]: E0121 10:57:17.333667 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e8bb6d97-b3b8-4e31-b704-8e565385ab26-kube-api-access-kz6fb podName:e8bb6d97-b3b8-4e31-b704-8e565385ab26 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:17.833635349 +0000 UTC m=+25.093591818 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kz6fb" (UniqueName: "kubernetes.io/projected/e8bb6d97-b3b8-4e31-b704-8e565385ab26-kube-api-access-kz6fb") pod "ovnkube-node-bx64f" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26") : failed to sync configmap cache: timed out waiting for the condition Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.355816 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.408657 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.408695 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.408706 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.408722 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.408734 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:17Z","lastTransitionTime":"2026-01-21T10:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.511619 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.511652 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.511663 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.511679 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.511691 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:17Z","lastTransitionTime":"2026-01-21T10:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.538801 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.613703 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.613738 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.613746 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.613761 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.613769 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:17Z","lastTransitionTime":"2026-01-21T10:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.646132 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.716230 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.716279 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.716289 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.716306 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.716319 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:17Z","lastTransitionTime":"2026-01-21T10:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.728670 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1"} Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.728707 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d"} Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.729888 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fs42r" event={"ID":"09da9e14-f6d5-4346-a4a0-c17711e3b603","Type":"ContainerStarted","Data":"821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb"} Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.731229 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" event={"ID":"c14980d7-1b3b-463b-8f57-f1e1afbd258c","Type":"ContainerStarted","Data":"a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48"} Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.735441 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8sptw" event={"ID":"f19f480e-331f-42f5-a3b6-fd0c6847b157","Type":"ContainerStarted","Data":"21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6"} Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.746843 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.777556 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\
\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.797582 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/web
hook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.818273 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.818315 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.818326 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.818343 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.818355 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:17Z","lastTransitionTime":"2026-01-21T10:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.828232 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovn-node-metrics-cert\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.830551 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.835441 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovn-node-metrics-cert\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.842748 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.855115 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.860849 4881 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.867301 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.873306 4881 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.882983 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.889179 4881 csr.go:261] certificate signing request csr-s78ct is approved, waiting to be issued Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.897301 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.907845 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.918211 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.921340 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.921361 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.921368 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.921380 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.921390 4881 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:17Z","lastTransitionTime":"2026-01-21T10:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.929850 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz6fb\" (UniqueName: \"kubernetes.io/projected/e8bb6d97-b3b8-4e31-b704-8e565385ab26-kube-api-access-kz6fb\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.931701 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6
a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.932980 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz6fb\" (UniqueName: \"kubernetes.io/projected/e8bb6d97-b3b8-4e31-b704-8e565385ab26-kube-api-access-kz6fb\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.945337 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.957510 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.982910 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.005591 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.012198 4881 csr.go:257] certificate signing request csr-s78ct is issued Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.023352 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.023380 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.023388 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.023402 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.023437 4881 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:18Z","lastTransitionTime":"2026-01-21T10:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.024898 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:18Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.034966 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:18Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.046491 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:18Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.074587 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:18Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.106513 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 05:01:23.240728372 +0000 UTC Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.108396 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:18Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.126318 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.126351 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.126359 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.126372 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.126384 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:18Z","lastTransitionTime":"2026-01-21T10:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.134154 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:18Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.184247 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.228410 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.228436 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.228445 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.228457 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.228467 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:18Z","lastTransitionTime":"2026-01-21T10:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.288768 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:18Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.309982 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:18 crc kubenswrapper[4881]: E0121 10:57:18.310147 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.310556 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:18 crc kubenswrapper[4881]: E0121 10:57:18.310628 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.317233 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:18Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.327287 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:18Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.331031 4881 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.331062 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.331070 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.331085 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.331094 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:18Z","lastTransitionTime":"2026-01-21T10:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.346020 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:18Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.433443 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.433478 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 
10:57:18.433487 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.433501 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.433540 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:18Z","lastTransitionTime":"2026-01-21T10:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.535282 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.535318 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.535328 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.535343 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.535355 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:18Z","lastTransitionTime":"2026-01-21T10:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.637524 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.637558 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.637568 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.637582 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.637598 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:18Z","lastTransitionTime":"2026-01-21T10:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.738839 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.738875 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.738886 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.738900 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.738911 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:18Z","lastTransitionTime":"2026-01-21T10:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.739594 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerStarted","Data":"a06b3458bc6abd92816719b2c657b7e45cd4d79bda9753bf86e22c8e99a3027c"} Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.840627 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.840847 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.840924 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.841009 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.841092 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:18Z","lastTransitionTime":"2026-01-21T10:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.841576 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.841831 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.841860 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:18 crc kubenswrapper[4881]: E0121 10:57:18.842445 4881 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:57:18 crc kubenswrapper[4881]: E0121 10:57:18.842577 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:22.842556017 +0000 UTC m=+30.102512546 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:57:18 crc kubenswrapper[4881]: E0121 10:57:18.842664 4881 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:57:18 crc kubenswrapper[4881]: E0121 10:57:18.842721 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:22.842705911 +0000 UTC m=+30.102662470 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:57:18 crc kubenswrapper[4881]: E0121 10:57:18.842866 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 10:57:22.842855104 +0000 UTC m=+30.102811573 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.942498 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.942557 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:18 crc kubenswrapper[4881]: E0121 10:57:18.942693 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:57:18 crc kubenswrapper[4881]: E0121 10:57:18.942694 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:57:18 crc kubenswrapper[4881]: E0121 10:57:18.942713 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:57:18 crc kubenswrapper[4881]: E0121 10:57:18.942725 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:57:18 crc kubenswrapper[4881]: E0121 10:57:18.942732 4881 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:18 crc kubenswrapper[4881]: E0121 10:57:18.942739 4881 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:18 crc kubenswrapper[4881]: E0121 10:57:18.942808 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:22.942774715 +0000 UTC m=+30.202731204 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:18 crc kubenswrapper[4881]: E0121 10:57:18.942845 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:22.942836127 +0000 UTC m=+30.202792606 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.943409 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.943508 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.943591 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.943671 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.943735 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:18Z","lastTransitionTime":"2026-01-21T10:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.012932 4881 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-21 10:52:18 +0000 UTC, rotation deadline is 2026-11-28 10:16:18.720862966 +0000 UTC Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.013249 4881 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7463h18m59.707618703s for next certificate rotation Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.046091 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.046122 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.046131 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.046147 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.046160 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:19Z","lastTransitionTime":"2026-01-21T10:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.107300 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 05:06:56.150130036 +0000 UTC Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.148618 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.148652 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.148662 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.148677 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.148687 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:19Z","lastTransitionTime":"2026-01-21T10:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.250431 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.250457 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.250467 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.250480 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.250489 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:19Z","lastTransitionTime":"2026-01-21T10:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.333439 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:19 crc kubenswrapper[4881]: E0121 10:57:19.333838 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.352601 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.352632 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.352641 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.352656 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.352665 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:19Z","lastTransitionTime":"2026-01-21T10:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.489891 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.489940 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.489950 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.489967 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.489978 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:19Z","lastTransitionTime":"2026-01-21T10:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.593015 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.593059 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.593070 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.593086 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.593098 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:19Z","lastTransitionTime":"2026-01-21T10:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.695350 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.695389 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.695402 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.695422 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.695436 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:19Z","lastTransitionTime":"2026-01-21T10:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.745594 4881 generic.go:334] "Generic (PLEG): container finished" podID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerID="db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd" exitCode=0 Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.745690 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerDied","Data":"db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd"} Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.767133 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.786458 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700"} Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.787521 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.788479 4881 generic.go:334] "Generic (PLEG): container finished" podID="c14980d7-1b3b-463b-8f57-f1e1afbd258c" containerID="a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48" exitCode=0 Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.788548 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" event={"ID":"c14980d7-1b3b-463b-8f57-f1e1afbd258c","Type":"ContainerDied","Data":"a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48"} Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.798905 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.798952 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.798969 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.798991 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.799007 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:19Z","lastTransitionTime":"2026-01-21T10:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.807460 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.822704 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.836598 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.856265 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.872515 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.890169 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.901561 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.901594 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.901602 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.901615 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.901623 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:19Z","lastTransitionTime":"2026-01-21T10:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.910538 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.926352 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.954197 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T10:57:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.975969 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.996531 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.013033 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.013080 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.013091 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.013107 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.013120 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:20Z","lastTransitionTime":"2026-01-21T10:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.015670 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.026729 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml9
9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.044963 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z 
is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.057125 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.074286 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.089328 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.101221 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.107904 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 19:15:34.523778449 +0000 UTC Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.112545 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.115455 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.115486 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.115495 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.115509 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.115519 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:20Z","lastTransitionTime":"2026-01-21T10:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.126647 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.139542 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.150069 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.161154 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.174271 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.217829 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.217852 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.217860 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.217872 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.217880 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:20Z","lastTransitionTime":"2026-01-21T10:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.310413 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:20 crc kubenswrapper[4881]: E0121 10:57:20.310816 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.310702 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:20 crc kubenswrapper[4881]: E0121 10:57:20.311057 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.320095 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.320354 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.320561 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.320766 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.320875 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:20Z","lastTransitionTime":"2026-01-21T10:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.423196 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.423556 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.423756 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.423990 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.424093 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:20Z","lastTransitionTime":"2026-01-21T10:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.585661 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.585893 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.585987 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.586068 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.586146 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:20Z","lastTransitionTime":"2026-01-21T10:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.699985 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.700341 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.700362 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.700384 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.700401 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:20Z","lastTransitionTime":"2026-01-21T10:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.795203 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerStarted","Data":"d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e"} Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.798000 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" event={"ID":"c14980d7-1b3b-463b-8f57-f1e1afbd258c","Type":"ContainerStarted","Data":"0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756"} Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.805852 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.805911 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.805923 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.805944 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.805963 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:20Z","lastTransitionTime":"2026-01-21T10:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.815641 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.828836 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.843343 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.861486 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.877286 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.908723 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.908753 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.908762 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.908783 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.908805 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:20Z","lastTransitionTime":"2026-01-21T10:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.908914 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.921052 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.966983 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.980299 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.004296 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\
\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.011868 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.011919 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.011931 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.011951 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.011967 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:21Z","lastTransitionTime":"2026-01-21T10:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.015825 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.032255 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.043170 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.109085 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 06:11:25.393152712 +0000 UTC Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.114930 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 
10:57:21.114964 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.114975 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.114994 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.115005 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:21Z","lastTransitionTime":"2026-01-21T10:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.221190 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.221220 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.221227 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.221240 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.221248 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:21Z","lastTransitionTime":"2026-01-21T10:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.312017 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:21 crc kubenswrapper[4881]: E0121 10:57:21.312154 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.322928 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.322954 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.322962 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.322973 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.322982 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:21Z","lastTransitionTime":"2026-01-21T10:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.486007 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.486034 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.486044 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.486059 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.486071 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:21Z","lastTransitionTime":"2026-01-21T10:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.588679 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.588773 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.588809 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.588827 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.588839 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:21Z","lastTransitionTime":"2026-01-21T10:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.697102 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.697141 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.697153 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.697174 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.697185 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:21Z","lastTransitionTime":"2026-01-21T10:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.801161 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.801244 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.801260 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.801280 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.801298 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:21Z","lastTransitionTime":"2026-01-21T10:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.807639 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerStarted","Data":"b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb"} Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.903737 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.903779 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.903805 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.903824 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.903833 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:21Z","lastTransitionTime":"2026-01-21T10:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.007361 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.007429 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.007447 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.007877 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.007918 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:22Z","lastTransitionTime":"2026-01-21T10:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.011506 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.012165 4881 scope.go:117] "RemoveContainer" containerID="676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570" Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.012367 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.109642 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 10:48:35.057424569 +0000 UTC Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.110433 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.110501 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.110513 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.110526 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.110535 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:22Z","lastTransitionTime":"2026-01-21T10:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.214180 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.214231 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.214247 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.214272 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.214290 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:22Z","lastTransitionTime":"2026-01-21T10:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.263322 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-tjwf8"] Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.263697 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-tjwf8" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.265998 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.266875 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.266973 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.267824 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.285420 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.310486 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.310656 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.310742 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.310813 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.314702 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.317103 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.317184 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.317206 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.317235 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.317257 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:22Z","lastTransitionTime":"2026-01-21T10:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.331684 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.347043 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"message\\\":\\\"containers with 
unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.363306 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.373725 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.383244 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cf4f6fc0-ed4c-47b7-b2bc-8033980781a3-host\") pod \"node-ca-tjwf8\" (UID: \"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\") " pod="openshift-image-registry/node-ca-tjwf8" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.383308 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/cf4f6fc0-ed4c-47b7-b2bc-8033980781a3-serviceca\") pod \"node-ca-tjwf8\" (UID: \"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\") " pod="openshift-image-registry/node-ca-tjwf8" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.383370 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57d55\" (UniqueName: \"kubernetes.io/projected/cf4f6fc0-ed4c-47b7-b2bc-8033980781a3-kube-api-access-57d55\") pod \"node-ca-tjwf8\" (UID: \"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\") " pod="openshift-image-registry/node-ca-tjwf8" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.384723 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.399935 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:22Z is after 
2025-08-24T17:21:41Z" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.411360 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.419502 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.419533 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.419541 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.419558 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.419567 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:22Z","lastTransitionTime":"2026-01-21T10:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.428359 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountP
ath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.445669 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.466274 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.481086 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.484746 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/cf4f6fc0-ed4c-47b7-b2bc-8033980781a3-serviceca\") pod \"node-ca-tjwf8\" (UID: \"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\") " pod="openshift-image-registry/node-ca-tjwf8" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.485042 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57d55\" (UniqueName: \"kubernetes.io/projected/cf4f6fc0-ed4c-47b7-b2bc-8033980781a3-kube-api-access-57d55\") pod \"node-ca-tjwf8\" (UID: \"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\") " pod="openshift-image-registry/node-ca-tjwf8" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.485154 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cf4f6fc0-ed4c-47b7-b2bc-8033980781a3-host\") pod \"node-ca-tjwf8\" (UID: \"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\") " pod="openshift-image-registry/node-ca-tjwf8" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.485279 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cf4f6fc0-ed4c-47b7-b2bc-8033980781a3-host\") pod \"node-ca-tjwf8\" (UID: \"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\") " pod="openshift-image-registry/node-ca-tjwf8" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.486456 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/cf4f6fc0-ed4c-47b7-b2bc-8033980781a3-serviceca\") pod \"node-ca-tjwf8\" (UID: \"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\") " pod="openshift-image-registry/node-ca-tjwf8" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.500247 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.509963 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57d55\" (UniqueName: \"kubernetes.io/projected/cf4f6fc0-ed4c-47b7-b2bc-8033980781a3-kube-api-access-57d55\") pod \"node-ca-tjwf8\" (UID: \"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\") " pod="openshift-image-registry/node-ca-tjwf8" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.521669 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.521706 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.521718 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.521734 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:22 crc 
kubenswrapper[4881]: I0121 10:57:22.521744 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:22Z","lastTransitionTime":"2026-01-21T10:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.576590 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-tjwf8" Jan 21 10:57:22 crc kubenswrapper[4881]: W0121 10:57:22.593868 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf4f6fc0_ed4c_47b7_b2bc_8033980781a3.slice/crio-01617000387d477911d9cb738c195ac6bfacdc21c7a315e15ef50fc5fb308e58 WatchSource:0}: Error finding container 01617000387d477911d9cb738c195ac6bfacdc21c7a315e15ef50fc5fb308e58: Status 404 returned error can't find the container with id 01617000387d477911d9cb738c195ac6bfacdc21c7a315e15ef50fc5fb308e58 Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.624575 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.624622 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.624632 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.624647 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.624660 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:22Z","lastTransitionTime":"2026-01-21T10:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.727762 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.727799 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.727810 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.727826 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.727836 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:22Z","lastTransitionTime":"2026-01-21T10:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.813875 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerStarted","Data":"599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38"} Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.814222 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerStarted","Data":"e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045"} Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.815884 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tjwf8" event={"ID":"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3","Type":"ContainerStarted","Data":"01617000387d477911d9cb738c195ac6bfacdc21c7a315e15ef50fc5fb308e58"} Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.831301 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.831361 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.831372 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.831391 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.831406 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:22Z","lastTransitionTime":"2026-01-21T10:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.890047 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.890226 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.890290 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.890421 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:57:30.890384275 +0000 UTC m=+38.150340744 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.890442 4881 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.890533 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:30.890526439 +0000 UTC m=+38.150482898 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.890445 4881 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.890676 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:30.890656172 +0000 UTC m=+38.150612831 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.932365 4881 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 21 10:57:22 crc kubenswrapper[4881]: W0121 10:57:22.938035 4881 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: very short watch: object-"openshift-image-registry"/"node-ca-dockercfg-4777p": Unexpected watch close - watch lasted less than a second and no items received Jan 21 10:57:22 crc kubenswrapper[4881]: W0121 10:57:22.939297 4881 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"image-registry-certificates": Unexpected watch close - watch lasted less than a second and no items received Jan 21 10:57:22 crc kubenswrapper[4881]: W0121 10:57:22.939497 4881 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 10:57:22 crc kubenswrapper[4881]: W0121 10:57:22.940136 4881 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.945251 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.945275 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.945283 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.945297 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:22 crc 
kubenswrapper[4881]: I0121 10:57:22.945307 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:22Z","lastTransitionTime":"2026-01-21T10:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.991876 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.991979 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.992179 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.992208 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.992226 4881 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.992297 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:30.992273905 +0000 UTC m=+38.252230374 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.992573 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.992665 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.992730 4881 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.992872 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:30.992848918 +0000 UTC m=+38.252805387 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.083009 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.083106 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.083117 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.083189 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.083217 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:23Z","lastTransitionTime":"2026-01-21T10:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.110560 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 21:24:37.429630896 +0000 UTC Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.185585 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.185622 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.185630 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.185643 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.185653 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:23Z","lastTransitionTime":"2026-01-21T10:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.289188 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.289260 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.289276 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.289308 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.289325 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:23Z","lastTransitionTime":"2026-01-21T10:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.309849 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:23 crc kubenswrapper[4881]: E0121 10:57:23.310057 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.333424 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.349594 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.367513 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.382274 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.391922 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.391973 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.391987 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.392007 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.392020 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:23Z","lastTransitionTime":"2026-01-21T10:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.399391 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.424715 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.449376 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.478260 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.494332 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.494388 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.494415 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.494449 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.494472 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:23Z","lastTransitionTime":"2026-01-21T10:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.507850 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"container
ID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.551550 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z 
is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.565937 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.580206 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.589698 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.596753 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.596801 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.596813 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.596831 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.596842 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:23Z","lastTransitionTime":"2026-01-21T10:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.602716 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.698645 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.698688 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.698699 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.698721 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.698731 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:23Z","lastTransitionTime":"2026-01-21T10:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.801900 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.801943 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.801953 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.801968 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.801977 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:23Z","lastTransitionTime":"2026-01-21T10:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.822098 4881 generic.go:334] "Generic (PLEG): container finished" podID="c14980d7-1b3b-463b-8f57-f1e1afbd258c" containerID="0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756" exitCode=0 Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.822178 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" event={"ID":"c14980d7-1b3b-463b-8f57-f1e1afbd258c","Type":"ContainerDied","Data":"0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756"} Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.826807 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tjwf8" event={"ID":"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3","Type":"ContainerStarted","Data":"9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac"} Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.838013 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerStarted","Data":"9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db"} Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.838062 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerStarted","Data":"f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6"} Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.847282 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.866417 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.873327 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.895405 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/r
un/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.904473 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.904561 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.904589 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.904628 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.904652 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:23Z","lastTransitionTime":"2026-01-21T10:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
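The NodeNotReady condition above keeps repeating while kubelet waits for a network plugin to drop its configuration into place. A minimal sketch of the readiness check implied by the message, assuming only that kubelet looks for *.conf, *.conflist, or *.json files in the directory named in the log (the path is taken from the log itself; the filename extensions are an assumption from the CNI convention, not from this log):

    import os

    # Hypothetical re-creation of the probe behind "NetworkReady=false":
    # stay not-ready until a CNI configuration file appears in the conf dir.
    CONF_DIR = "/etc/kubernetes/cni/net.d/"

    def cni_ready(conf_dir: str = CONF_DIR) -> bool:
        try:
            names = os.listdir(conf_dir)
        except FileNotFoundError:
            return False
        # Extensions assumed from the CNI convention.
        return any(n.endswith((".conf", ".conflist", ".json")) for n in names)

    if not cni_ready():
        print(f"no CNI configuration file in {CONF_DIR}. Has your network provider started?")

Once the network provider writes a config into the host CNI dir (the ovnkube-controller container above mounts /etc/cni/net.d as host-cni-netd for exactly this purpose), a check of this shape should flip and the Ready condition can clear.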
Has your network provider started?"} Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.911209 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.937933 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.956323 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.963577 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.966526 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.982195 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.997534 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.007345 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.007371 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.007379 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.007393 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.007402 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:24Z","lastTransitionTime":"2026-01-21T10:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
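Every status patch in this stretch fails with the same x509 error: the node-identity webhook's serving certificate expired on 2025-08-24T17:21:41Z, while the node's clock reads 2026-01-21. A minimal sketch of the validity-window check the TLS client is performing, using the two timestamps from the error text (treated as UTC; notBefore is a placeholder, since the log only shows the expiry bound):

    from datetime import datetime, timezone

    # Timestamps copied from the recurring error above; NOT_BEFORE is an
    # assumed placeholder because the log reports only the notAfter bound.
    NOT_BEFORE = datetime(2025, 1, 1, tzinfo=timezone.utc)  # assumption
    NOT_AFTER = datetime(2025, 8, 24, 17, 21, 41, tzinfo=timezone.utc)
    NOW = datetime(2026, 1, 21, 10, 57, 23, tzinfo=timezone.utc)

    # A certificate is acceptable only inside [NOT_BEFORE, NOT_AFTER];
    # outside that window verification fails exactly as kubelet logs here.
    if NOW > NOT_AFTER:
        print("x509: certificate has expired or is not yet valid: "
              f"current time {NOW:%Y-%m-%dT%H:%M:%SZ} is after {NOT_AFTER:%Y-%m-%dT%H:%M:%SZ}")
    elif NOW < NOT_BEFORE:
        print("x509: certificate is not yet valid")

Until that certificate is rotated (or the clock corrected), every Post to https://127.0.0.1:9743/pod will fail the same way, which is why the status_manager.go:875 "Failed to update status for pod" entry repeats for every pod on the node.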
Has your network provider started?"} Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.014754 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.027683 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"message\\\":\\\"containers with 
unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.042821 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.055102 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.071287 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.073524 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.090254 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105
d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.098642 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.109733 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.110721 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 22:19:48.074328417 +0000 UTC Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.111318 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.111362 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.111381 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.111405 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.111422 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:24Z","lastTransitionTime":"2026-01-21T10:57:24Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.134062 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.155474 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.172778 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.187444 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.201388 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.214341 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.214384 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.214396 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.214464 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.214475 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:24Z","lastTransitionTime":"2026-01-21T10:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.216304 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.229269 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.241884 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/r
un/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.253665 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.273267 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z 
is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.292103 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.305010 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.310209 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.310217 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:24 crc kubenswrapper[4881]: E0121 10:57:24.310325 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:24 crc kubenswrapper[4881]: E0121 10:57:24.310427 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.317468 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.317520 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.317537 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.317556 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.317573 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:24Z","lastTransitionTime":"2026-01-21T10:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.420624 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.420702 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.420721 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.420746 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.420766 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:24Z","lastTransitionTime":"2026-01-21T10:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.525831 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.525902 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.525924 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.525953 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.525975 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:24Z","lastTransitionTime":"2026-01-21T10:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.629826 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.629893 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.629917 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.629949 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.629972 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:24Z","lastTransitionTime":"2026-01-21T10:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.698761 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.698873 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.698898 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.698927 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.698947 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:24Z","lastTransitionTime":"2026-01-21T10:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:24 crc kubenswrapper[4881]: E0121 10:57:24.729902 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.736220 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.736492 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.736995 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.737313 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.737382 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:24Z","lastTransitionTime":"2026-01-21T10:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:24 crc kubenswrapper[4881]: E0121 10:57:24.756927 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.762079 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.762252 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.762338 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.762551 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.762660 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:24Z","lastTransitionTime":"2026-01-21T10:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:24 crc kubenswrapper[4881]: E0121 10:57:24.782937 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.793566 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.793618 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.793637 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.793662 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.793680 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:24Z","lastTransitionTime":"2026-01-21T10:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:24 crc kubenswrapper[4881]: E0121 10:57:24.809551 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.813556 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.813597 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.813612 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.813630 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.813643 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:24Z","lastTransitionTime":"2026-01-21T10:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:24 crc kubenswrapper[4881]: E0121 10:57:24.828261 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: E0121 10:57:24.828466 4881 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.830660 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.830699 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.830711 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.830728 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.830739 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:24Z","lastTransitionTime":"2026-01-21T10:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.844403 4881 generic.go:334] "Generic (PLEG): container finished" podID="c14980d7-1b3b-463b-8f57-f1e1afbd258c" containerID="7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248" exitCode=0 Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.844478 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" event={"ID":"c14980d7-1b3b-463b-8f57-f1e1afbd258c","Type":"ContainerDied","Data":"7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248"} Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.858412 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.877006 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.892807 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.907295 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.925942 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.933743 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.933778 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.933813 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.934010 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.934022 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:24Z","lastTransitionTime":"2026-01-21T10:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.943137 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.958310 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.979738 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.993635 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.007260 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.033406 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",
\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.035663 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.035694 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.035702 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.035716 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.035725 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:25Z","lastTransitionTime":"2026-01-21T10:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.048852 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.074830 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.088600 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.111951 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 13:09:46.212864965 +0000 UTC Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.138718 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.138761 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.138773 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.138802 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.138812 4881 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:25Z","lastTransitionTime":"2026-01-21T10:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.240760 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.240858 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.240877 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.240899 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.240917 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:25Z","lastTransitionTime":"2026-01-21T10:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.311131 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:25 crc kubenswrapper[4881]: E0121 10:57:25.311242 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.342915 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.342958 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.342970 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.343009 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.343023 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:25Z","lastTransitionTime":"2026-01-21T10:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.445480 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.445517 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.445527 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.445543 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.445559 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:25Z","lastTransitionTime":"2026-01-21T10:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.548096 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.548170 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.548185 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.548249 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.548264 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:25Z","lastTransitionTime":"2026-01-21T10:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.650001 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.650039 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.650048 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.650061 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.650071 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:25Z","lastTransitionTime":"2026-01-21T10:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.752921 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.752978 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.752990 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.753008 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.753029 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:25Z","lastTransitionTime":"2026-01-21T10:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.852696 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerStarted","Data":"47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef"} Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.854754 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.854801 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.854812 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.854825 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.854835 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:25Z","lastTransitionTime":"2026-01-21T10:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.856555 4881 generic.go:334] "Generic (PLEG): container finished" podID="c14980d7-1b3b-463b-8f57-f1e1afbd258c" containerID="3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9" exitCode=0 Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.856622 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" event={"ID":"c14980d7-1b3b-463b-8f57-f1e1afbd258c","Type":"ContainerDied","Data":"3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9"} Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.876540 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered 
and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.898077 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.912834 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.927895 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.942303 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.956150 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.957230 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.957301 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.957311 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.957347 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.957358 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:25Z","lastTransitionTime":"2026-01-21T10:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.975668 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.989645 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.006542 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.021103 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.042844 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:26Z 
is after 2025-08-24T17:21:41Z" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.059928 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.059974 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.059984 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.060001 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.060017 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:26Z","lastTransitionTime":"2026-01-21T10:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.060150 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.073367 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.085638 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.112162 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 16:47:07.34577816 +0000 UTC Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.162376 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.162407 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.162417 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.162431 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.162440 4881 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:26Z","lastTransitionTime":"2026-01-21T10:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.264926 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.264979 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.264988 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.265004 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.265013 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:26Z","lastTransitionTime":"2026-01-21T10:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.309818 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.309884 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:26 crc kubenswrapper[4881]: E0121 10:57:26.310100 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:26 crc kubenswrapper[4881]: E0121 10:57:26.310231 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.367526 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.367570 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.367579 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.367595 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.367604 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:26Z","lastTransitionTime":"2026-01-21T10:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.471964 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.472034 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.472053 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.472080 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.472098 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:26Z","lastTransitionTime":"2026-01-21T10:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.574358 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.574502 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.574598 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.574670 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.574745 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:26Z","lastTransitionTime":"2026-01-21T10:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.677227 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.677263 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.677270 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.677285 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.677294 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:26Z","lastTransitionTime":"2026-01-21T10:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.779853 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.779903 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.779918 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.779946 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.779961 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:26Z","lastTransitionTime":"2026-01-21T10:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.872942 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" event={"ID":"c14980d7-1b3b-463b-8f57-f1e1afbd258c","Type":"ContainerStarted","Data":"c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b"} Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.883114 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.883162 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.883175 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.883194 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.883209 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:26Z","lastTransitionTime":"2026-01-21T10:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.889425 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.903500 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.921321 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.939962 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.955846 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.968655 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.984555 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.986425 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.986465 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.986477 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.986497 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.986512 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:26Z","lastTransitionTime":"2026-01-21T10:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.001908 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.017510 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.031872 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.049065 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9
8100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.060897 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.078801 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.089776 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.089831 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.089841 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.089854 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.089864 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:27Z","lastTransitionTime":"2026-01-21T10:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.093304 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.112705 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 
11:59:26.875524956 +0000 UTC Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.193009 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.193066 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.193091 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.193116 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.193132 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:27Z","lastTransitionTime":"2026-01-21T10:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.296286 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.296374 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.296399 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.296430 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.296447 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:27Z","lastTransitionTime":"2026-01-21T10:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.309843 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:27 crc kubenswrapper[4881]: E0121 10:57:27.310025 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.398917 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.398959 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.398970 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.398988 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.399000 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:27Z","lastTransitionTime":"2026-01-21T10:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.501330 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.501733 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.501746 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.501764 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.501780 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:27Z","lastTransitionTime":"2026-01-21T10:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.608376 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.608445 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.608470 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.608502 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.608546 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:27Z","lastTransitionTime":"2026-01-21T10:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.711895 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.711948 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.711966 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.711992 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.712011 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:27Z","lastTransitionTime":"2026-01-21T10:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.815532 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.815573 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.815587 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.815606 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.815622 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:27Z","lastTransitionTime":"2026-01-21T10:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.881373 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerStarted","Data":"58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5"} Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.882257 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.882384 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.886881 4881 generic.go:334] "Generic (PLEG): container finished" podID="c14980d7-1b3b-463b-8f57-f1e1afbd258c" containerID="c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b" exitCode=0 Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.886944 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" event={"ID":"c14980d7-1b3b-463b-8f57-f1e1afbd258c","Type":"ContainerDied","Data":"c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b"} Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.897852 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.919431 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.919490 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.919502 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.919527 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.919544 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:27Z","lastTransitionTime":"2026-01-21T10:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.920183 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.935302 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.955911 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.955992 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.964178 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.980637 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57
:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.996057 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.007221 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.023311 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.024531 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.024564 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.024599 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.024616 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.024627 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:28Z","lastTransitionTime":"2026-01-21T10:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.035395 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.050425 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.064679 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.078682 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.091706 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.106547 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.117510 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 03:49:55.041160393 +0000 UTC Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.120626 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: 
I0121 10:57:28.130623 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.130650 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.130662 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.130677 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.130687 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:28Z","lastTransitionTime":"2026-01-21T10:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.131767 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.145483 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.166447 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.179597 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.204218 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.216430 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.231151 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.232894 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.232925 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.232935 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.232952 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.232965 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:28Z","lastTransitionTime":"2026-01-21T10:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.246946 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.279061 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.289836 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.301392 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.310244 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.310301 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:28 crc kubenswrapper[4881]: E0121 10:57:28.310401 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:28 crc kubenswrapper[4881]: E0121 10:57:28.310493 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.316986 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.336654 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.336712 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.336726 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.336746 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.336761 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:28Z","lastTransitionTime":"2026-01-21T10:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.336941 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58a840f0217d0e057d132d7debeba49b9c541f7f
69f33178abee1a44909c83c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.439301 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.439336 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.439347 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.439362 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.439372 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:28Z","lastTransitionTime":"2026-01-21T10:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.542327 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.542673 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.542813 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.542951 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.543064 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:28Z","lastTransitionTime":"2026-01-21T10:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.646026 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.646074 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.646085 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.646104 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.646121 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:28Z","lastTransitionTime":"2026-01-21T10:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.749433 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.749497 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.749513 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.749552 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.749570 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:28Z","lastTransitionTime":"2026-01-21T10:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.852538 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.852591 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.852607 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.852631 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.852648 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:28Z","lastTransitionTime":"2026-01-21T10:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.892288 4881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.956915 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.956956 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.956966 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.956981 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.956994 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:28Z","lastTransitionTime":"2026-01-21T10:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.061978 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.062024 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.062035 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.062053 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.062064 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:29Z","lastTransitionTime":"2026-01-21T10:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.117693 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 02:27:17.853793355 +0000 UTC
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.165143 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.165196 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.165233 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.165265 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.165286 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:29Z","lastTransitionTime":"2026-01-21T10:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.270168 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.270233 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.270250 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.270273 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.270290 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:29Z","lastTransitionTime":"2026-01-21T10:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.310373 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:57:29 crc kubenswrapper[4881]: E0121 10:57:29.310636 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.374026 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.374439 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.374648 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.374894 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.375103 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:29Z","lastTransitionTime":"2026-01-21T10:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.478647 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.478689 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.478700 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.478717 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.478729 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:29Z","lastTransitionTime":"2026-01-21T10:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.489898 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth"]
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.490690 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.493008 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.493436 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.511096 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.525631 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.539982 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.551161 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.565227 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.578036 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d379505c-c658-4dd5-b841-40c8443012c6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qgrth\" (UID: \"d379505c-c658-4dd5-b841-40c8443012c6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.578087 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d379505c-c658-4dd5-b841-40c8443012c6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qgrth\" (UID: \"d379505c-c658-4dd5-b841-40c8443012c6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.578106 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d379505c-c658-4dd5-b841-40c8443012c6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qgrth\" (UID: \"d379505c-c658-4dd5-b841-40c8443012c6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.578132 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57krk\" (UniqueName: \"kubernetes.io/projected/d379505c-c658-4dd5-b841-40c8443012c6-kube-api-access-57krk\") pod \"ovnkube-control-plane-749d76644c-qgrth\" (UID: \"d379505c-c658-4dd5-b841-40c8443012c6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.581563 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.581616 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.581630 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.581650 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.581665 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:29Z","lastTransitionTime":"2026-01-21T10:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.581766 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.598212 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.611281 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.628965 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.642349 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.652920 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.666639 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.679490 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57krk\" (UniqueName: \"kubernetes.io/projected/d379505c-c658-4dd5-b841-40c8443012c6-kube-api-access-57krk\") pod \"ovnkube-control-plane-749d76644c-qgrth\" (UID: \"d379505c-c658-4dd5-b841-40c8443012c6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.679647 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d379505c-c658-4dd5-b841-40c8443012c6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qgrth\" (UID: \"d379505c-c658-4dd5-b841-40c8443012c6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.679750 4881 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d379505c-c658-4dd5-b841-40c8443012c6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qgrth\" (UID: \"d379505c-c658-4dd5-b841-40c8443012c6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.679807 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d379505c-c658-4dd5-b841-40c8443012c6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qgrth\" (UID: \"d379505c-c658-4dd5-b841-40c8443012c6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.680773 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d379505c-c658-4dd5-b841-40c8443012c6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qgrth\" (UID: \"d379505c-c658-4dd5-b841-40c8443012c6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.680877 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d379505c-c658-4dd5-b841-40c8443012c6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qgrth\" (UID: \"d379505c-c658-4dd5-b841-40c8443012c6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.681442 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.685197 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.685573 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.685540 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d379505c-c658-4dd5-b841-40c8443012c6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qgrth\" (UID: \"d379505c-c658-4dd5-b841-40c8443012c6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.685585 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.685662 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.685675 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:29Z","lastTransitionTime":"2026-01-21T10:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.698917 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.702611 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57krk\" (UniqueName: \"kubernetes.io/projected/d379505c-c658-4dd5-b841-40c8443012c6-kube-api-access-57krk\") pod \"ovnkube-control-plane-749d76644c-qgrth\" (UID: \"d379505c-c658-4dd5-b841-40c8443012c6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.711238 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.788808 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.788844 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.788853 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.788869 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.788878 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:29Z","lastTransitionTime":"2026-01-21T10:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.808369 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.892020 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.892501 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.892675 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.892953 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.893117 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:29Z","lastTransitionTime":"2026-01-21T10:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.908164 4881 generic.go:334] "Generic (PLEG): container finished" podID="c14980d7-1b3b-463b-8f57-f1e1afbd258c" containerID="13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9" exitCode=0 Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.908347 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" event={"ID":"c14980d7-1b3b-463b-8f57-f1e1afbd258c","Type":"ContainerDied","Data":"13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9"} Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.910489 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" event={"ID":"d379505c-c658-4dd5-b841-40c8443012c6","Type":"ContainerStarted","Data":"940c09b091f4d8b17833fc9e9f36c4d8ff8768d518f48994774a58ed142f85da"} Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.910513 4881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.925474 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.942920 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.954526 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.982695 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.998467 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.998542 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.998554 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:29 crc 
kubenswrapper[4881]: I0121 10:57:29.998582 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.998600 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:29Z","lastTransitionTime":"2026-01-21T10:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.998634 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.016753 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.032764 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.051729 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.068348 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.083657 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.096875 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.101283 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.101321 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.101331 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.101348 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.101360 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:30Z","lastTransitionTime":"2026-01-21T10:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.109892 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.117964 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 12:32:01.827891641 +0000 UTC Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.128250 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.145979 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.158954 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.204030 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.204074 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.204086 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.204101 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.204113 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:30Z","lastTransitionTime":"2026-01-21T10:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.310266 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.310356 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:30 crc kubenswrapper[4881]: E0121 10:57:30.310499 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:30 crc kubenswrapper[4881]: E0121 10:57:30.310637 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.318546 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.318575 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.318585 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.318600 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.318610 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:30Z","lastTransitionTime":"2026-01-21T10:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.421988 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.422056 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.422074 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.422100 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.422120 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:30Z","lastTransitionTime":"2026-01-21T10:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.524622 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.524651 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.524659 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.524680 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.524688 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:30Z","lastTransitionTime":"2026-01-21T10:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.627932 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.627990 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.628010 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.628038 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.628056 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:30Z","lastTransitionTime":"2026-01-21T10:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.731852 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.731937 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.731953 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.731975 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.731988 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:30Z","lastTransitionTime":"2026-01-21T10:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.867154 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.867189 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.867198 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.867213 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.867221 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:30Z","lastTransitionTime":"2026-01-21T10:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.894518 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.894684 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.894765 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:30 crc kubenswrapper[4881]: E0121 10:57:30.894961 4881 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:57:30 crc kubenswrapper[4881]: E0121 10:57:30.895042 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:46.89501955 +0000 UTC m=+54.154976029 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:57:30 crc kubenswrapper[4881]: E0121 10:57:30.895312 4881 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:57:30 crc kubenswrapper[4881]: E0121 10:57:30.895367 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:46.895353188 +0000 UTC m=+54.155309657 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:57:30 crc kubenswrapper[4881]: E0121 10:57:30.895437 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 10:57:46.89542912 +0000 UTC m=+54.155385589 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.938699 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" event={"ID":"c14980d7-1b3b-463b-8f57-f1e1afbd258c","Type":"ContainerStarted","Data":"fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052"} Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.940378 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" event={"ID":"d379505c-c658-4dd5-b841-40c8443012c6","Type":"ContainerStarted","Data":"d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52"} Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.940400 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" event={"ID":"d379505c-c658-4dd5-b841-40c8443012c6","Type":"ContainerStarted","Data":"51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9"} Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.953162 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.963992 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.970095 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.970119 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.970129 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.970145 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.970156 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:30Z","lastTransitionTime":"2026-01-21T10:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.986049 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.996001 4881 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.996080 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:30 crc kubenswrapper[4881]: E0121 10:57:30.996188 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:57:30 crc kubenswrapper[4881]: E0121 10:57:30.996203 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:57:30 crc kubenswrapper[4881]: E0121 10:57:30.996213 4881 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:30 crc kubenswrapper[4881]: E0121 10:57:30.996252 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:46.996239702 +0000 UTC m=+54.256196171 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:30 crc kubenswrapper[4881]: E0121 10:57:30.997531 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:57:30 crc kubenswrapper[4881]: E0121 10:57:30.997553 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:57:30 crc kubenswrapper[4881]: E0121 10:57:30.997562 4881 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:30 crc kubenswrapper[4881]: E0121 10:57:30.997586 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-21 10:57:46.997577165 +0000 UTC m=+54.257533634 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.005836 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.028549 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.059113 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.073030 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.073078 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.073090 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.073108 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.073118 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:31Z","lastTransitionTime":"2026-01-21T10:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.073566 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.086474 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.100854 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.119914 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.132413 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 03:59:51.57649681 +0000 UTC Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.141432 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58a840f0217d0e057d132d7debeba49b9c541f7f
69f33178abee1a44909c83c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.156718 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.215477 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.234812 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.249962 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.261722 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.272797 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.288445 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.288496 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.288506 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.288526 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.288537 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:31Z","lastTransitionTime":"2026-01-21T10:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.312353 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:31 crc kubenswrapper[4881]: E0121 10:57:31.312466 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.358902 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoin
t\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e
28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-0
1-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.371585 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\
\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.390404 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01
-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/
\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswi
tch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.391886 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.391910 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.391919 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.391934 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.391947 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:31Z","lastTransitionTime":"2026-01-21T10:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.405707 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.418454 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.432337 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.446720 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.460380 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.473461 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.488735 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.502655 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.518140 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.533596 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.612588 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.612613 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.612620 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.612635 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.612643 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:31Z","lastTransitionTime":"2026-01-21T10:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.715813 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.715868 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.715887 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.715915 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.715933 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:31Z","lastTransitionTime":"2026-01-21T10:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.894677 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-dtv4t"] Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.895200 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.895237 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.895248 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.895268 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.895280 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:31Z","lastTransitionTime":"2026-01-21T10:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.895347 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:31 crc kubenswrapper[4881]: E0121 10:57:31.895427 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:31.913425 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:31.932921 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:31.953358 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:31.967423 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:31.981023 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:31.995697 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.008213 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.021813 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.034323 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.047230 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.059547 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.075756 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.089515 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.110928 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.130123 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.140165 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.162424 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 19:38:09.531043099 +0000 UTC Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.164459 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqlps\" (UniqueName: \"kubernetes.io/projected/3552adbd-011f-4552-9e69-233b92c554c8-kube-api-access-cqlps\") pod \"network-metrics-daemon-dtv4t\" (UID: \"3552adbd-011f-4552-9e69-233b92c554c8\") " pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.164529 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs\") pod \"network-metrics-daemon-dtv4t\" (UID: \"3552adbd-011f-4552-9e69-233b92c554c8\") " pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 
10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.166769 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.166904 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.166979 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.167072 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.167150 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:32Z","lastTransitionTime":"2026-01-21T10:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.265930 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqlps\" (UniqueName: \"kubernetes.io/projected/3552adbd-011f-4552-9e69-233b92c554c8-kube-api-access-cqlps\") pod \"network-metrics-daemon-dtv4t\" (UID: \"3552adbd-011f-4552-9e69-233b92c554c8\") " pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.266387 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs\") pod \"network-metrics-daemon-dtv4t\" (UID: \"3552adbd-011f-4552-9e69-233b92c554c8\") " pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:32 crc kubenswrapper[4881]: E0121 10:57:32.266754 4881 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:57:32 crc kubenswrapper[4881]: E0121 10:57:32.266958 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs podName:3552adbd-011f-4552-9e69-233b92c554c8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:32.766932189 +0000 UTC m=+40.026888678 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs") pod "network-metrics-daemon-dtv4t" (UID: "3552adbd-011f-4552-9e69-233b92c554c8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.274072 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.274563 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.275198 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.275352 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.275479 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:32Z","lastTransitionTime":"2026-01-21T10:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.298347 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqlps\" (UniqueName: \"kubernetes.io/projected/3552adbd-011f-4552-9e69-233b92c554c8-kube-api-access-cqlps\") pod \"network-metrics-daemon-dtv4t\" (UID: \"3552adbd-011f-4552-9e69-233b92c554c8\") " pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.312398 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:32 crc kubenswrapper[4881]: E0121 10:57:32.312976 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.313661 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:32 crc kubenswrapper[4881]: E0121 10:57:32.313877 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.434042 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.434083 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.434094 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.434111 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.434124 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:32Z","lastTransitionTime":"2026-01-21T10:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.548858 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.549664 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.549745 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.549841 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.549918 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:32Z","lastTransitionTime":"2026-01-21T10:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.653024 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.653073 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.653085 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.653107 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.653119 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:32Z","lastTransitionTime":"2026-01-21T10:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.756507 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.756552 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.756563 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.756577 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.756587 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:32Z","lastTransitionTime":"2026-01-21T10:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.828686 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs\") pod \"network-metrics-daemon-dtv4t\" (UID: \"3552adbd-011f-4552-9e69-233b92c554c8\") " pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:32 crc kubenswrapper[4881]: E0121 10:57:32.828944 4881 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:57:32 crc kubenswrapper[4881]: E0121 10:57:32.829042 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs podName:3552adbd-011f-4552-9e69-233b92c554c8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:33.829011254 +0000 UTC m=+41.088967723 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs") pod "network-metrics-daemon-dtv4t" (UID: "3552adbd-011f-4552-9e69-233b92c554c8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.859272 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.859318 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.859329 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.859347 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.859363 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:32Z","lastTransitionTime":"2026-01-21T10:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.962303 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.962345 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.962354 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.962372 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.962385 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:32Z","lastTransitionTime":"2026-01-21T10:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.065452 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.065498 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.065525 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.065546 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.065562 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:33Z","lastTransitionTime":"2026-01-21T10:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.163612 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 11:54:31.547330546 +0000 UTC Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.169229 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.169271 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.169287 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.169353 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.169371 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:33Z","lastTransitionTime":"2026-01-21T10:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.272795 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.272835 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.272843 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.272859 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.272869 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:33Z","lastTransitionTime":"2026-01-21T10:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.310488 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.310997 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:33 crc kubenswrapper[4881]: E0121 10:57:33.311117 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.311319 4881 scope.go:117] "RemoveContainer" containerID="676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570" Jan 21 10:57:33 crc kubenswrapper[4881]: E0121 10:57:33.311429 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.326022 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.338098 4881 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.357235 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.369263 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.375467 4881 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.375779 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.375956 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.376064 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.376174 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:33Z","lastTransitionTime":"2026-01-21T10:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.390546 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58a840f0217d0e057d132d7debeba49b9c541f7f
69f33178abee1a44909c83c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.404828 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.415495 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.428620 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.440766 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.451895 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.466132 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.477778 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.478589 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.478621 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.478632 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.478651 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.478663 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:33Z","lastTransitionTime":"2026-01-21T10:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.489619 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.503122 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.515333 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.527194 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.585509 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.585599 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.585622 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.585654 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.585693 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:33Z","lastTransitionTime":"2026-01-21T10:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.687528 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.688016 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.688094 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.688156 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.688217 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:33Z","lastTransitionTime":"2026-01-21T10:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.790995 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.791286 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.791370 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.791448 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.791510 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:33Z","lastTransitionTime":"2026-01-21T10:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.869122 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs\") pod \"network-metrics-daemon-dtv4t\" (UID: \"3552adbd-011f-4552-9e69-233b92c554c8\") " pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:33 crc kubenswrapper[4881]: E0121 10:57:33.869257 4881 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:57:33 crc kubenswrapper[4881]: E0121 10:57:33.869312 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs podName:3552adbd-011f-4552-9e69-233b92c554c8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:35.869295708 +0000 UTC m=+43.129252187 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs") pod "network-metrics-daemon-dtv4t" (UID: "3552adbd-011f-4552-9e69-233b92c554c8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.893804 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.893839 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.893851 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.893868 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.893885 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:33Z","lastTransitionTime":"2026-01-21T10:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.953958 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.955732 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766"} Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.956069 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.968328 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.977215 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.992877 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:33.996698 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:33.996743 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:33.996755 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:33.996770 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:33.996800 4881 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:33Z","lastTransitionTime":"2026-01-21T10:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.007797 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.019265 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.028912 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.042944 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.053605 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.065112 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.075056 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.086229 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.099664 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.099709 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.099720 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.099741 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.099753 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:34Z","lastTransitionTime":"2026-01-21T10:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.100856 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.115121 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.128945 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.148090 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.160613 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.164729 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 21:32:04.944078768 +0000 UTC Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.201764 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 
10:57:34.201865 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.201878 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.201896 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.201910 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:34Z","lastTransitionTime":"2026-01-21T10:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.304696 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.304734 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.304753 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.304770 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.304799 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:34Z","lastTransitionTime":"2026-01-21T10:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.309566 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 10:57:34 crc kubenswrapper[4881]: E0121 10:57:34.309702 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.309566 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 10:57:34 crc kubenswrapper[4881]: E0121 10:57:34.310122 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.406847 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.406889 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.406898 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.406912 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.406926 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:34Z","lastTransitionTime":"2026-01-21T10:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.508843 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.508889 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.508901 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.508916 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.508927 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:34Z","lastTransitionTime":"2026-01-21T10:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.611934 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.612005 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.612020 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.612037 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.612048 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:34Z","lastTransitionTime":"2026-01-21T10:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.714204 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.714251 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.714266 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.714288 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.714304 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:34Z","lastTransitionTime":"2026-01-21T10:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.816408 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.816461 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.816473 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.816494 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.816521 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:34Z","lastTransitionTime":"2026-01-21T10:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.868737 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.868797 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.868807 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.868821 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.868830 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:34Z","lastTransitionTime":"2026-01-21T10:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:34 crc kubenswrapper[4881]: E0121 10:57:34.883287 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.887393 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.887441 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.887453 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.887470 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.887482 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:34Z","lastTransitionTime":"2026-01-21T10:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:34 crc kubenswrapper[4881]: E0121 10:57:34.906122 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.909536 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.909588 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.909598 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.909613 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.909623 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:34Z","lastTransitionTime":"2026-01-21T10:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:34 crc kubenswrapper[4881]: E0121 10:57:34.921037 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.925426 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.925475 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.925488 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.925507 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.925518 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:34Z","lastTransitionTime":"2026-01-21T10:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:34 crc kubenswrapper[4881]: E0121 10:57:34.938390 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.942301 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.942336 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.942347 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.942362 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.942373 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:34Z","lastTransitionTime":"2026-01-21T10:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.961703 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovnkube-controller/0.log" Jan 21 10:57:34 crc kubenswrapper[4881]: E0121 10:57:34.961941 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: E0121 10:57:34.962175 4881 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.965737 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.965769 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.965779 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.965809 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.965818 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:34Z","lastTransitionTime":"2026-01-21T10:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.967699 4881 generic.go:334] "Generic (PLEG): container finished" podID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerID="58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5" exitCode=1 Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.967775 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerDied","Data":"58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5"} Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.968573 4881 scope.go:117] "RemoveContainer" containerID="58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.981570 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.997975 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.009109 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.018427 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.030596 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.040896 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.051369 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.062587 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.068021 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.068066 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.068082 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.068105 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.068120 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:35Z","lastTransitionTime":"2026-01-21T10:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.072920 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.086341 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.101969 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.120369 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.133636 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.150363 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"rk-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 10:57:34.006073 6119 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 10:57:34.006081 6119 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 10:57:34.006288 6119 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.006459 6119 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.006687 6119 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.007039 6119 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.007082 6119 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 10:57:34.007124 6119 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.007122 6119 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 10:57:34.007524 6119 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 10:57:34.007540 6119 factory.go:656] Stopping watch factory\\\\nI0121 10:57:34.007557 6119 ovnkube.go:599] Stopped ovnkube\\\\nI0121 10:57:34.007612 6119 metrics.go:553] Stopping metrics server at 
address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.161436 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.165701 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 01:35:01.503553354 +0000 UTC Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.171125 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.171170 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.171185 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.171206 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.171220 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:35Z","lastTransitionTime":"2026-01-21T10:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.172299 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.273410 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.273446 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.273459 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.273473 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.273482 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:35Z","lastTransitionTime":"2026-01-21T10:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.309843 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t"
Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.309928 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:57:35 crc kubenswrapper[4881]: E0121 10:57:35.310040 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8"
Jan 21 10:57:35 crc kubenswrapper[4881]: E0121 10:57:35.310181 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.375854 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.375907 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.375923 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.375943 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.375955 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:35Z","lastTransitionTime":"2026-01-21T10:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.478915 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.478965 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.478982 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.479005 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.479023 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:35Z","lastTransitionTime":"2026-01-21T10:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.581182 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.581227 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.581241 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.581257 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.581269 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:35Z","lastTransitionTime":"2026-01-21T10:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.683839 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.683887 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.683896 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.683911 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.683921 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:35Z","lastTransitionTime":"2026-01-21T10:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.786422 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.786477 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.786487 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.786503 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.786513 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:35Z","lastTransitionTime":"2026-01-21T10:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.886761 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs\") pod \"network-metrics-daemon-dtv4t\" (UID: \"3552adbd-011f-4552-9e69-233b92c554c8\") " pod="openshift-multus/network-metrics-daemon-dtv4t"
Jan 21 10:57:35 crc kubenswrapper[4881]: E0121 10:57:35.886951 4881 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 10:57:35 crc kubenswrapper[4881]: E0121 10:57:35.887009 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs podName:3552adbd-011f-4552-9e69-233b92c554c8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:39.886991696 +0000 UTC m=+47.146948165 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs") pod "network-metrics-daemon-dtv4t" (UID: "3552adbd-011f-4552-9e69-233b92c554c8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.888924 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.888965 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.888982 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.889015 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.889036 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:35Z","lastTransitionTime":"2026-01-21T10:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.981512 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovnkube-controller/0.log" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.984863 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerStarted","Data":"5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563"} Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.985073 4881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.991080 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.991165 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.991191 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.991243 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.991259 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:35Z","lastTransitionTime":"2026-01-21T10:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.004749 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.020881 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.034242 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.043137 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.055357 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.065806 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.081388 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.093109 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.094052 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.094086 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.094098 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.094113 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.094122 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:36Z","lastTransitionTime":"2026-01-21T10:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.106910 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.122850 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.137431 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.149979 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.166405 4881 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 13:00:06.753174748 +0000 UTC Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.169592 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"rk-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 10:57:34.006073 6119 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 10:57:34.006081 6119 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 10:57:34.006288 6119 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.006459 6119 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.006687 6119 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.007039 6119 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.007082 6119 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 10:57:34.007124 6119 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.007122 6119 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 10:57:34.007524 6119 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 10:57:34.007540 6119 factory.go:656] Stopping watch factory\\\\nI0121 10:57:34.007557 6119 ovnkube.go:599] Stopped ovnkube\\\\nI0121 10:57:34.007612 6119 metrics.go:553] Stopping metrics server at 
address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.185169 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.196263 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.196320 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.196345 4881 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.196369 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.196399 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:36Z","lastTransitionTime":"2026-01-21T10:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.197305 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.210548 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.298669 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.298709 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.298722 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.298739 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.298750 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:36Z","lastTransitionTime":"2026-01-21T10:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.309934 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:36 crc kubenswrapper[4881]: E0121 10:57:36.310027 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.309942 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:36 crc kubenswrapper[4881]: E0121 10:57:36.310136 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.401577 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.401636 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.401656 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.401684 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.401701 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:36Z","lastTransitionTime":"2026-01-21T10:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.503970 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.504028 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.504044 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.504063 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.504075 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:36Z","lastTransitionTime":"2026-01-21T10:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.606683 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.606741 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.606757 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.606808 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.606822 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:36Z","lastTransitionTime":"2026-01-21T10:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.709042 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.709094 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.709104 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.709121 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.709131 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:36Z","lastTransitionTime":"2026-01-21T10:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.811843 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.811886 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.811897 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.811911 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.811919 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:36Z","lastTransitionTime":"2026-01-21T10:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.914292 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.914345 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.914363 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.914384 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.914400 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:36Z","lastTransitionTime":"2026-01-21T10:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.990451 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovnkube-controller/1.log" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.991477 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovnkube-controller/0.log" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.996345 4881 generic.go:334] "Generic (PLEG): container finished" podID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerID="5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563" exitCode=1 Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.996413 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerDied","Data":"5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563"} Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.996477 4881 scope.go:117] "RemoveContainer" containerID="58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.997891 4881 scope.go:117] "RemoveContainer" containerID="5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563" Jan 21 10:57:36 crc kubenswrapper[4881]: E0121 10:57:36.998219 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bx64f_openshift-ovn-kubernetes(e8bb6d97-b3b8-4e31-b704-8e565385ab26)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.017207 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.017266 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.017284 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.017310 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.017325 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:37Z","lastTransitionTime":"2026-01-21T10:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.018988 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.034623 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.054915 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.068151 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.086931 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.101168 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.113860 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.121708 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.121738 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.121750 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.121768 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.121780 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:37Z","lastTransitionTime":"2026-01-21T10:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.127082 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.139305 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.153911 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.167599 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 04:13:36.635350859 +0000 UTC Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.167766 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.187462 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.206926 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc
5ac563fdc3d85b094414c563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"rk-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 10:57:34.006073 6119 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 10:57:34.006081 6119 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 10:57:34.006288 6119 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.006459 6119 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.006687 6119 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.007039 6119 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.007082 6119 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 10:57:34.007124 6119 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.007122 6119 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 10:57:34.007524 6119 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 10:57:34.007540 6119 factory.go:656] Stopping watch factory\\\\nI0121 10:57:34.007557 6119 ovnkube.go:599] Stopped ovnkube\\\\nI0121 10:57:34.007612 6119 metrics.go:553] Stopping metrics server at address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:36Z\\\",\\\"message\\\":\\\"-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 10:57:35.987756 6363 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 10:57:35.987768 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0121 10:57:35.988839 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network 
controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.224301 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.224329 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.224338 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.224353 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.224362 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:37Z","lastTransitionTime":"2026-01-21T10:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.225778 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.238562 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.254027 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.310642 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:37 crc kubenswrapper[4881]: E0121 10:57:37.310774 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.310643 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:37 crc kubenswrapper[4881]: E0121 10:57:37.311384 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.341866 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.341903 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.341918 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.341934 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.341946 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:37Z","lastTransitionTime":"2026-01-21T10:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.444893 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.444950 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.444961 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.444982 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.444998 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:37Z","lastTransitionTime":"2026-01-21T10:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.548278 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.548323 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.548338 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.548362 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.548379 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:37Z","lastTransitionTime":"2026-01-21T10:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.651685 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.651742 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.651759 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.651819 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.651837 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:37Z","lastTransitionTime":"2026-01-21T10:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.754447 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.754527 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.754570 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.754596 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.754615 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:37Z","lastTransitionTime":"2026-01-21T10:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.858144 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.858192 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.858203 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.858220 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.858233 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:37Z","lastTransitionTime":"2026-01-21T10:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.960255 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.960293 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.960306 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.960327 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.960345 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:37Z","lastTransitionTime":"2026-01-21T10:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.002169 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovnkube-controller/1.log" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.062517 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.062565 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.062576 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.062591 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.062602 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:38Z","lastTransitionTime":"2026-01-21T10:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.165830 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.165918 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.166010 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.166037 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.166055 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:38Z","lastTransitionTime":"2026-01-21T10:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.168218 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 05:29:32.182314304 +0000 UTC Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.268973 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.269015 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.269026 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.269043 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.269053 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:38Z","lastTransitionTime":"2026-01-21T10:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.309852 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.309877 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:38 crc kubenswrapper[4881]: E0121 10:57:38.310007 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:38 crc kubenswrapper[4881]: E0121 10:57:38.310169 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.372382 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.372433 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.372444 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.372462 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.372476 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:38Z","lastTransitionTime":"2026-01-21T10:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.475407 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.475487 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.475505 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.475532 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.475550 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:38Z","lastTransitionTime":"2026-01-21T10:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.578442 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.578518 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.578541 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.578572 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.578590 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:38Z","lastTransitionTime":"2026-01-21T10:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.681779 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.681927 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.681946 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.681970 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.681987 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:38Z","lastTransitionTime":"2026-01-21T10:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.785212 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.785263 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.785279 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.785301 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.785318 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:38Z","lastTransitionTime":"2026-01-21T10:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.888733 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.888867 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.888892 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.889368 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.889644 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:38Z","lastTransitionTime":"2026-01-21T10:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.992903 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.992956 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.992984 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.993007 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.993022 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:38Z","lastTransitionTime":"2026-01-21T10:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.095744 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.095825 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.095838 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.095861 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.095877 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:39Z","lastTransitionTime":"2026-01-21T10:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.168335 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 12:47:24.708743692 +0000 UTC Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.198357 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.198424 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.198443 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.198467 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.198484 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:39Z","lastTransitionTime":"2026-01-21T10:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.300589 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.300640 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.300651 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.300670 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.300683 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:39Z","lastTransitionTime":"2026-01-21T10:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.310176 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:39 crc kubenswrapper[4881]: E0121 10:57:39.310330 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.310178 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:39 crc kubenswrapper[4881]: E0121 10:57:39.310505 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.404729 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.404809 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.404821 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.404845 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.404858 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:39Z","lastTransitionTime":"2026-01-21T10:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.507359 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.507397 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.507406 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.507420 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.507429 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:39Z","lastTransitionTime":"2026-01-21T10:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.610261 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.610312 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.610340 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.610358 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.610367 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:39Z","lastTransitionTime":"2026-01-21T10:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.712913 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.712945 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.712953 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.712968 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.712977 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:39Z","lastTransitionTime":"2026-01-21T10:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.816066 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.816117 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.816127 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.816143 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.816155 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:39Z","lastTransitionTime":"2026-01-21T10:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.919846 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.919899 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.919909 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.919929 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.919939 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:39Z","lastTransitionTime":"2026-01-21T10:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.924293 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs\") pod \"network-metrics-daemon-dtv4t\" (UID: \"3552adbd-011f-4552-9e69-233b92c554c8\") " pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:39 crc kubenswrapper[4881]: E0121 10:57:39.924428 4881 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:57:39 crc kubenswrapper[4881]: E0121 10:57:39.924481 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs podName:3552adbd-011f-4552-9e69-233b92c554c8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:47.9244675 +0000 UTC m=+55.184423969 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs") pod "network-metrics-daemon-dtv4t" (UID: "3552adbd-011f-4552-9e69-233b92c554c8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.022822 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.022885 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.022902 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.022924 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.022941 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:40Z","lastTransitionTime":"2026-01-21T10:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.126272 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.126332 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.126344 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.126365 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.126378 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:40Z","lastTransitionTime":"2026-01-21T10:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.169500 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 05:36:48.868631141 +0000 UTC Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.229198 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.229249 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.229259 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.229273 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.229287 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:40Z","lastTransitionTime":"2026-01-21T10:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.309731 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.309819 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:40 crc kubenswrapper[4881]: E0121 10:57:40.309886 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:40 crc kubenswrapper[4881]: E0121 10:57:40.309965 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.332010 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.332054 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.332066 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.332082 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.332094 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:40Z","lastTransitionTime":"2026-01-21T10:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.434731 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.434813 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.434831 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.434849 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.434860 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:40Z","lastTransitionTime":"2026-01-21T10:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.537775 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.537849 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.537864 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.537883 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.537894 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:40Z","lastTransitionTime":"2026-01-21T10:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.641071 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.641136 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.641154 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.641179 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.641201 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:40Z","lastTransitionTime":"2026-01-21T10:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.745078 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.745144 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.745156 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.745178 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.745194 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:40Z","lastTransitionTime":"2026-01-21T10:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.848765 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.848855 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.848866 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.848883 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.848894 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:40Z","lastTransitionTime":"2026-01-21T10:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.952187 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.952252 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.952266 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.952287 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.952301 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:40Z","lastTransitionTime":"2026-01-21T10:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.054894 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.054950 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.054967 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.054988 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.055005 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:41Z","lastTransitionTime":"2026-01-21T10:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.158166 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.158241 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.158265 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.158298 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.158321 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:41Z","lastTransitionTime":"2026-01-21T10:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.170358 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 22:00:57.017517171 +0000 UTC
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.261408 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.261647 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.261657 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.261677 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.261688 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:41Z","lastTransitionTime":"2026-01-21T10:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.309900 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:57:41 crc kubenswrapper[4881]: E0121 10:57:41.310046 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.309901 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t"
Jan 21 10:57:41 crc kubenswrapper[4881]: E0121 10:57:41.310301 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.364142 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.364190 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.364203 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.364219 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.364230 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:41Z","lastTransitionTime":"2026-01-21T10:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.468203 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.468286 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.468301 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.468326 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.468338 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:41Z","lastTransitionTime":"2026-01-21T10:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.572096 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.572159 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.572172 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.572194 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.572212 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:41Z","lastTransitionTime":"2026-01-21T10:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.676255 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.676328 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.676363 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.676400 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.676422 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:41Z","lastTransitionTime":"2026-01-21T10:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.779837 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.779896 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.779913 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.779947 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.779963 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:41Z","lastTransitionTime":"2026-01-21T10:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.882572 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.882609 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.882619 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.882633 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.882642 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:41Z","lastTransitionTime":"2026-01-21T10:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.985967 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.986071 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.986091 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.986122 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.986142 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:41Z","lastTransitionTime":"2026-01-21T10:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.089755 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.089861 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.089886 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.089917 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.089939 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:42Z","lastTransitionTime":"2026-01-21T10:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.171314 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 08:28:12.597517463 +0000 UTC
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.193540 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.193624 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.193641 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.193667 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.193687 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:42Z","lastTransitionTime":"2026-01-21T10:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.296603 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.296665 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.296683 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.296706 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.296723 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:42Z","lastTransitionTime":"2026-01-21T10:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.310091 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.310120 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 10:57:42 crc kubenswrapper[4881]: E0121 10:57:42.310330 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 10:57:42 crc kubenswrapper[4881]: E0121 10:57:42.310453 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.399901 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.399959 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.399996 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.400025 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.400049 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:42Z","lastTransitionTime":"2026-01-21T10:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.502872 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.502934 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.502952 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.502978 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.503002 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:42Z","lastTransitionTime":"2026-01-21T10:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.606327 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.606404 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.606424 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.606451 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.606468 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:42Z","lastTransitionTime":"2026-01-21T10:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.709267 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.709312 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.709323 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.709340 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.709351 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:42Z","lastTransitionTime":"2026-01-21T10:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.811171 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.811212 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.811221 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.811235 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.811244 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:42Z","lastTransitionTime":"2026-01-21T10:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.914864 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.914942 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.914964 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.914994 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.915014 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:42Z","lastTransitionTime":"2026-01-21T10:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.018319 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.018402 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.018439 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.018467 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.018482 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:43Z","lastTransitionTime":"2026-01-21T10:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.121854 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.121902 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.121912 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.121931 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.121941 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:43Z","lastTransitionTime":"2026-01-21T10:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.172066 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 09:54:26.802642859 +0000 UTC
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.224611 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.224700 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.224716 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.224861 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.224884 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:43Z","lastTransitionTime":"2026-01-21T10:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.310610 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.310609 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:57:43 crc kubenswrapper[4881]: E0121 10:57:43.310849 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8"
Jan 21 10:57:43 crc kubenswrapper[4881]: E0121 10:57:43.311161 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.333938 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.333986 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.334001 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.334020 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.334032 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:43Z","lastTransitionTime":"2026-01-21T10:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.338565 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.355527 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.372141 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.385323 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.404271 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.419705 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.436370 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.436444 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.436462 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.436488 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.436508 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:43Z","lastTransitionTime":"2026-01-21T10:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.436936 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.450889 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.463321 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.477775 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.500221 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.515333 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/r
ootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.538520 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc
5ac563fdc3d85b094414c563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"rk-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 10:57:34.006073 6119 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 10:57:34.006081 6119 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 10:57:34.006288 6119 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.006459 6119 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.006687 6119 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.007039 6119 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.007082 6119 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 10:57:34.007124 6119 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.007122 6119 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 10:57:34.007524 6119 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 10:57:34.007540 6119 factory.go:656] Stopping watch factory\\\\nI0121 10:57:34.007557 6119 ovnkube.go:599] Stopped ovnkube\\\\nI0121 10:57:34.007612 6119 metrics.go:553] Stopping metrics server at address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:36Z\\\",\\\"message\\\":\\\"-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 10:57:35.987756 6363 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 10:57:35.987768 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0121 10:57:35.988839 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network 
controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.540780 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.540857 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.540870 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.540890 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.540903 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:43Z","lastTransitionTime":"2026-01-21T10:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.556906 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.572975 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.587050 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.644381 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.644426 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.644436 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.644454 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.644464 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:43Z","lastTransitionTime":"2026-01-21T10:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.747632 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.747680 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.747701 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.747731 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.747752 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:43Z","lastTransitionTime":"2026-01-21T10:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.850823 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.851638 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.851849 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.852063 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.852201 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:43Z","lastTransitionTime":"2026-01-21T10:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.954699 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.954780 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.954806 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.954822 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.954833 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:43Z","lastTransitionTime":"2026-01-21T10:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.058133 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.058163 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.058171 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.058183 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.058210 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:44Z","lastTransitionTime":"2026-01-21T10:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.161275 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.161422 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.161441 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.161463 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.161479 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:44Z","lastTransitionTime":"2026-01-21T10:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.172634 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 19:02:53.702748256 +0000 UTC Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.264163 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.264395 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.264546 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.264648 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.264736 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:44Z","lastTransitionTime":"2026-01-21T10:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.310700 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.310735 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:44 crc kubenswrapper[4881]: E0121 10:57:44.310923 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:44 crc kubenswrapper[4881]: E0121 10:57:44.311066 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.367681 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.367727 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.367738 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.367755 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.367767 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:44Z","lastTransitionTime":"2026-01-21T10:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.470895 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.470968 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.470991 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.471029 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.471055 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:44Z","lastTransitionTime":"2026-01-21T10:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.573325 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.573620 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.573761 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.573906 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.573992 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:44Z","lastTransitionTime":"2026-01-21T10:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.676034 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.676087 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.676105 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.676127 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.676144 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:44Z","lastTransitionTime":"2026-01-21T10:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.779184 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.779234 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.779244 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.779331 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.779341 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:44Z","lastTransitionTime":"2026-01-21T10:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.882804 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.882852 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.882865 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.882884 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.882898 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:44Z","lastTransitionTime":"2026-01-21T10:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.986976 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.987033 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.987045 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.987062 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.987072 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:44Z","lastTransitionTime":"2026-01-21T10:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.057580 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.057644 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.057662 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.057686 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.057705 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:45Z","lastTransitionTime":"2026-01-21T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:45 crc kubenswrapper[4881]: E0121 10:57:45.083297 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:45Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.090192 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.090237 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.090254 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.090277 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.090294 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:45Z","lastTransitionTime":"2026-01-21T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:45 crc kubenswrapper[4881]: E0121 10:57:45.109510 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:45Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.115409 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.115469 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.115495 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.115524 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.115547 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:45Z","lastTransitionTime":"2026-01-21T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:45 crc kubenswrapper[4881]: E0121 10:57:45.137205 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:45Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.143383 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.143448 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.143472 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.143501 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.143524 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:45Z","lastTransitionTime":"2026-01-21T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:45 crc kubenswrapper[4881]: E0121 10:57:45.159327 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:45Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.164571 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.164654 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.164672 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.164695 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.164711 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:45Z","lastTransitionTime":"2026-01-21T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.172943 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 14:17:40.62071874 +0000 UTC Jan 21 10:57:45 crc kubenswrapper[4881]: E0121 10:57:45.179169 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:45Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:45 crc kubenswrapper[4881]: E0121 10:57:45.179388 4881 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.182102 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.182165 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.182188 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.182219 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.182243 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:45Z","lastTransitionTime":"2026-01-21T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.285038 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.285096 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.285113 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.285136 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.285155 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:45Z","lastTransitionTime":"2026-01-21T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.310496 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.310626 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:45 crc kubenswrapper[4881]: E0121 10:57:45.310823 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:45 crc kubenswrapper[4881]: E0121 10:57:45.311045 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.388620 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.388669 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.388680 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.388699 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.388711 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:45Z","lastTransitionTime":"2026-01-21T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.492086 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.492166 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.492185 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.492213 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.492231 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:45Z","lastTransitionTime":"2026-01-21T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.595541 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.595620 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.595634 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.595656 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.595672 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:45Z","lastTransitionTime":"2026-01-21T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.698932 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.698979 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.698990 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.699011 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.699025 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:45Z","lastTransitionTime":"2026-01-21T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.802328 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.802383 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.802393 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.802414 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.802427 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:45Z","lastTransitionTime":"2026-01-21T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.905133 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.905194 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.905209 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.905231 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.905244 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:45Z","lastTransitionTime":"2026-01-21T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.008272 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.008311 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.008319 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.008335 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.008346 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:46Z","lastTransitionTime":"2026-01-21T10:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.111558 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.111634 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.111648 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.111671 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.111701 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:46Z","lastTransitionTime":"2026-01-21T10:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.174180 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 04:08:34.821568812 +0000 UTC Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.214891 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.214948 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.214964 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.214982 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.214994 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:46Z","lastTransitionTime":"2026-01-21T10:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.241674 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.243033 4881 scope.go:117] "RemoveContainer" containerID="5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563" Jan 21 10:57:46 crc kubenswrapper[4881]: E0121 10:57:46.243298 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bx64f_openshift-ovn-kubernetes(e8bb6d97-b3b8-4e31-b704-8e565385ab26)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.264651 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.294011 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.310048 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.310118 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-
hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:46 crc kubenswrapper[4881]: E0121 10:57:46.310235 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.310321 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:46 crc kubenswrapper[4881]: E0121 10:57:46.310368 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.318853 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.318896 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.318907 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.318925 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.318937 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:46Z","lastTransitionTime":"2026-01-21T10:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.347582 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc
5ac563fdc3d85b094414c563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:36Z\\\",\\\"message\\\":\\\"-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 10:57:35.987756 6363 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 10:57:35.987768 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0121 10:57:35.988839 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bx64f_openshift-ovn-kubernetes(e8bb6d97-b3b8-4e31-b704-8e565385ab26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.362399 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.378227 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 
10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.390910 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.407490 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.421833 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.423020 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.423069 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.423084 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.423105 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.423119 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:46Z","lastTransitionTime":"2026-01-21T10:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.437538 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.451487 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.466217 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.481208 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.499599 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.512542 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.524426 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.526568 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.526625 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.526638 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.526659 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.526669 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:46Z","lastTransitionTime":"2026-01-21T10:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.630119 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.630159 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.630170 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.630186 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.630196 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:46Z","lastTransitionTime":"2026-01-21T10:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.732596 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.732643 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.732683 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.732702 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.732712 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:46Z","lastTransitionTime":"2026-01-21T10:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.836324 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.836407 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.836427 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.836455 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.836472 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:46Z","lastTransitionTime":"2026-01-21T10:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.939678 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.939728 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.939745 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.939770 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.939829 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:46Z","lastTransitionTime":"2026-01-21T10:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.991683 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.991859 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:46 crc kubenswrapper[4881]: E0121 10:57:46.992012 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:18.9919588 +0000 UTC m=+86.251915269 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:57:46 crc kubenswrapper[4881]: E0121 10:57:46.992143 4881 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.992221 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:46 crc kubenswrapper[4881]: E0121 10:57:46.992254 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:58:18.992219356 +0000 UTC m=+86.252176015 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:57:46 crc kubenswrapper[4881]: E0121 10:57:46.992364 4881 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:57:46 crc kubenswrapper[4881]: E0121 10:57:46.992463 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:58:18.992439562 +0000 UTC m=+86.252396071 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.042074 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.042140 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.042163 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.042192 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.042213 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:47Z","lastTransitionTime":"2026-01-21T10:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.093708 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:47 crc kubenswrapper[4881]: E0121 10:57:47.094032 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:57:47 crc kubenswrapper[4881]: E0121 10:57:47.094376 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:57:47 crc kubenswrapper[4881]: E0121 10:57:47.094412 4881 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:47 crc kubenswrapper[4881]: E0121 10:57:47.094491 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 10:58:19.094466964 +0000 UTC m=+86.354423463 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:47 crc kubenswrapper[4881]: E0121 10:57:47.094726 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:57:47 crc kubenswrapper[4881]: E0121 10:57:47.094814 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:57:47 crc kubenswrapper[4881]: E0121 10:57:47.094835 4881 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:47 crc kubenswrapper[4881]: E0121 10:57:47.094922 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 10:58:19.094894874 +0000 UTC m=+86.354851343 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.095029 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.146069 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.146140 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.146165 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.146192 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.146212 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:47Z","lastTransitionTime":"2026-01-21T10:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.174451 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 22:59:45.550568886 +0000 UTC Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.250188 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.250273 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.250305 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.250340 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.250364 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:47Z","lastTransitionTime":"2026-01-21T10:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.310617 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.310685 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:47 crc kubenswrapper[4881]: E0121 10:57:47.310889 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:47 crc kubenswrapper[4881]: E0121 10:57:47.311119 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.354155 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.354288 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.354316 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.354348 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.354371 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:47Z","lastTransitionTime":"2026-01-21T10:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.444210 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.457695 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.457771 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.457829 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.457863 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.457886 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:47Z","lastTransitionTime":"2026-01-21T10:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.476706 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:36Z\\\",\\\"message\\\":\\\"-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 10:57:35.987756 6363 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 10:57:35.987768 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0121 10:57:35.988839 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting 
failed container=ovnkube-controller pod=ovnkube-node-bx64f_openshift-ovn-kubernetes(e8bb6d97-b3b8-4e31-b704-8e565385ab26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.498256 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.519958 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.541890 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.559297 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/r
ootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.560312 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.560387 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.560411 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.560440 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.560477 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:47Z","lastTransitionTime":"2026-01-21T10:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.578299 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.600647 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.615827 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.633421 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.654709 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.663603 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.663655 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.663666 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.663686 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.663697 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:47Z","lastTransitionTime":"2026-01-21T10:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.674346 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.691918 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 
10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.708751 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.726530 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.741389 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.762708 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.767351 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.767388 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.767427 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.767449 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.767465 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:47Z","lastTransitionTime":"2026-01-21T10:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.870387 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.870457 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.870469 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.870492 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.870507 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:47Z","lastTransitionTime":"2026-01-21T10:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.974040 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.974109 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.974129 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.974153 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.974170 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:47Z","lastTransitionTime":"2026-01-21T10:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.004833 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs\") pod \"network-metrics-daemon-dtv4t\" (UID: \"3552adbd-011f-4552-9e69-233b92c554c8\") " pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:48 crc kubenswrapper[4881]: E0121 10:57:48.005098 4881 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:57:48 crc kubenswrapper[4881]: E0121 10:57:48.005254 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs podName:3552adbd-011f-4552-9e69-233b92c554c8 nodeName:}" failed. No retries permitted until 2026-01-21 10:58:04.005208952 +0000 UTC m=+71.265165461 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs") pod "network-metrics-daemon-dtv4t" (UID: "3552adbd-011f-4552-9e69-233b92c554c8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.077287 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.077330 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.077339 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.077354 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.077363 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:48Z","lastTransitionTime":"2026-01-21T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.175510 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 07:33:59.246286538 +0000 UTC Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.180264 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.180303 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.180317 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.180335 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.180347 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:48Z","lastTransitionTime":"2026-01-21T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.283349 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.283412 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.283435 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.283461 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.283479 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:48Z","lastTransitionTime":"2026-01-21T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.310260 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.310455 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:48 crc kubenswrapper[4881]: E0121 10:57:48.310614 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:48 crc kubenswrapper[4881]: E0121 10:57:48.310839 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.412495 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.412560 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.412577 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.412601 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.412619 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:48Z","lastTransitionTime":"2026-01-21T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.515586 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.515629 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.515643 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.515663 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.515674 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:48Z","lastTransitionTime":"2026-01-21T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.618315 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.618386 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.618407 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.618431 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.618443 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:48Z","lastTransitionTime":"2026-01-21T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.721228 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.721301 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.721324 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.721349 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.721367 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:48Z","lastTransitionTime":"2026-01-21T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.823901 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.823961 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.823978 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.824000 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.824017 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:48Z","lastTransitionTime":"2026-01-21T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.927235 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.927288 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.927300 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.927323 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.927335 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:48Z","lastTransitionTime":"2026-01-21T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.030877 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.031203 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.031257 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.031290 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.031312 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:49Z","lastTransitionTime":"2026-01-21T10:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.133726 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.133834 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.133858 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.133887 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.133909 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:49Z","lastTransitionTime":"2026-01-21T10:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.142566 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.157501 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.160001 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92e
daf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.174927 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.175938 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 18:29:54.958987606 +0000 UTC Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.190606 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\
\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.206477 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 
10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.221898 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.236091 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.236146 4881 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.236156 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.236173 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.236185 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:49Z","lastTransitionTime":"2026-01-21T10:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.236392 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.253164 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.268923 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/r
ootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.291831 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc
5ac563fdc3d85b094414c563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:36Z\\\",\\\"message\\\":\\\"-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 10:57:35.987756 6363 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 10:57:35.987768 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0121 10:57:35.988839 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bx64f_openshift-ovn-kubernetes(e8bb6d97-b3b8-4e31-b704-8e565385ab26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.308905 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.309946 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.309956 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:49 crc kubenswrapper[4881]: E0121 10:57:49.310123 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:49 crc kubenswrapper[4881]: E0121 10:57:49.310248 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.326415 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.338421 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.338461 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.338470 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.338485 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.338495 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:49Z","lastTransitionTime":"2026-01-21T10:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.340088 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.356248 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.371201 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.383700 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.394076 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.441291 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.441326 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.441346 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.441364 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.441375 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:49Z","lastTransitionTime":"2026-01-21T10:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.568548 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.568585 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.568594 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.568608 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.568620 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:49Z","lastTransitionTime":"2026-01-21T10:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.672441 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.672484 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.672496 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.672518 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.672531 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:49Z","lastTransitionTime":"2026-01-21T10:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.774930 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.774989 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.775005 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.775047 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.775058 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:49Z","lastTransitionTime":"2026-01-21T10:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.877479 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.877521 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.877537 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.877558 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.877574 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:49Z","lastTransitionTime":"2026-01-21T10:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.980385 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.980444 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.980465 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.980484 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.980498 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:49Z","lastTransitionTime":"2026-01-21T10:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.083185 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.083263 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.083287 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.083322 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.083344 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:50Z","lastTransitionTime":"2026-01-21T10:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.176960 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 05:17:14.858632883 +0000 UTC Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.185840 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.185886 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.185899 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.185919 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.185930 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:50Z","lastTransitionTime":"2026-01-21T10:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.288316 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.288357 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.288370 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.288385 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.288397 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:50Z","lastTransitionTime":"2026-01-21T10:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.310623 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:50 crc kubenswrapper[4881]: E0121 10:57:50.310745 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.310627 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:50 crc kubenswrapper[4881]: E0121 10:57:50.310883 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.391003 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.391038 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.391047 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.391059 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.391068 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:50Z","lastTransitionTime":"2026-01-21T10:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.494063 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.494108 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.494117 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.494132 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.494149 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:50Z","lastTransitionTime":"2026-01-21T10:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.596660 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.596694 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.596703 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.596716 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.596727 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:50Z","lastTransitionTime":"2026-01-21T10:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.699228 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.699270 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.699281 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.699299 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.699312 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:50Z","lastTransitionTime":"2026-01-21T10:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.801892 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.801947 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.801958 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.801974 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.801985 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:50Z","lastTransitionTime":"2026-01-21T10:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.904441 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.904472 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.904480 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.904492 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.904501 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:50Z","lastTransitionTime":"2026-01-21T10:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.007077 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.007125 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.007138 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.007156 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.007194 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:51Z","lastTransitionTime":"2026-01-21T10:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.109924 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.109975 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.109988 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.110011 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.110025 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:51Z","lastTransitionTime":"2026-01-21T10:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.177766 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 11:45:34.335053704 +0000 UTC Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.214055 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.214092 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.214113 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.214126 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.214135 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:51Z","lastTransitionTime":"2026-01-21T10:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.310562 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.310723 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:51 crc kubenswrapper[4881]: E0121 10:57:51.310829 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:57:51 crc kubenswrapper[4881]: E0121 10:57:51.310951 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.320175 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.320244 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.320258 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.320296 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.320311 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:51Z","lastTransitionTime":"2026-01-21T10:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.422758 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.422829 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.422839 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.422854 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.422862 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:51Z","lastTransitionTime":"2026-01-21T10:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.525605 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.525642 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.525675 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.525688 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.525696 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:51Z","lastTransitionTime":"2026-01-21T10:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.629138 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.629171 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.629180 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.629193 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.629202 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:51Z","lastTransitionTime":"2026-01-21T10:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.731664 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.731699 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.731710 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.731725 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.731736 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:51Z","lastTransitionTime":"2026-01-21T10:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.840182 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.840219 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.840232 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.840250 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.840262 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:51Z","lastTransitionTime":"2026-01-21T10:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.942748 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.942804 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.942814 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.942827 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.942836 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:51Z","lastTransitionTime":"2026-01-21T10:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.045366 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.045404 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.045412 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.045425 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.045433 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:52Z","lastTransitionTime":"2026-01-21T10:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.148098 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.148157 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.148169 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.148189 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.148203 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:52Z","lastTransitionTime":"2026-01-21T10:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.178538 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 19:14:12.839094096 +0000 UTC Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.252286 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.252344 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.252355 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.252375 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.252388 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:52Z","lastTransitionTime":"2026-01-21T10:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.310646 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.310733 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:52 crc kubenswrapper[4881]: E0121 10:57:52.310827 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:52 crc kubenswrapper[4881]: E0121 10:57:52.310914 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.355298 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.355381 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.355406 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.355431 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.355451 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:52Z","lastTransitionTime":"2026-01-21T10:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.457431 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.457489 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.457498 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.457514 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.457525 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:52Z","lastTransitionTime":"2026-01-21T10:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.560555 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.560622 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.560655 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.560684 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.560708 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:52Z","lastTransitionTime":"2026-01-21T10:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.663821 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.663873 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.663893 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.663912 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.663924 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:52Z","lastTransitionTime":"2026-01-21T10:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.765709 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.765751 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.765762 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.765804 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.765817 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:52Z","lastTransitionTime":"2026-01-21T10:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.867962 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.868024 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.868035 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.868051 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.868062 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:52Z","lastTransitionTime":"2026-01-21T10:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.970318 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.970375 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.970388 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.970415 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.970447 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:52Z","lastTransitionTime":"2026-01-21T10:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.071958 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.071990 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.072000 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.072014 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.072025 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:53Z","lastTransitionTime":"2026-01-21T10:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.173774 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.173831 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.173841 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.173856 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.173866 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:53Z","lastTransitionTime":"2026-01-21T10:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.178970 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 13:39:00.616304803 +0000 UTC Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.275605 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.275650 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.275666 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.275683 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.275700 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:53Z","lastTransitionTime":"2026-01-21T10:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.310390 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.310484 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:53 crc kubenswrapper[4881]: E0121 10:57:53.310689 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:53 crc kubenswrapper[4881]: E0121 10:57:53.310842 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.329985 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13d0f0c4-fa31-44ba-bc94-c0a80fc1b2df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ef83fedf9cc77cf73fdd00486ec9b0b04712a60a5448402754a44ad46da439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36430b9d5b01b4a6f3b9e7b58bfbec0c258f34847b321cb45bc3b23f84cf09fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eba9cbb70fbd88687c81b18ad50f8386f836bf2fa2c8f9e1c503a20af985416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.348956 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.365241 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.379457 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.379512 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.379492 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.379525 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.379664 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.379681 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:53Z","lastTransitionTime":"2026-01-21T10:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.396865 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.413763 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.430576 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.445719 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 
10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.460329 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.476526 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.482817 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.482877 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.482887 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.482906 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.482921 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:53Z","lastTransitionTime":"2026-01-21T10:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.491874 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.508343 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.533504 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c1
70fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:36Z\\\",\\\"message\\\":\\\"-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 10:57:35.987756 6363 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 10:57:35.987768 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0121 10:57:35.988839 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bx64f_openshift-ovn-kubernetes(e8bb6d97-b3b8-4e31-b704-8e565385ab26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.550877 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.568081 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.586689 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.587012 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.587056 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:53 crc 
kubenswrapper[4881]: I0121 10:57:53.587071 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.587095 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.587113 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:53Z","lastTransitionTime":"2026-01-21T10:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.602211 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.691340 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.691446 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.691463 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.691586 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.691609 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:53Z","lastTransitionTime":"2026-01-21T10:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.794389 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.794423 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.794431 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.794444 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.794454 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:53Z","lastTransitionTime":"2026-01-21T10:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.897121 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.897168 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.897181 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.897201 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.897213 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:53Z","lastTransitionTime":"2026-01-21T10:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.002037 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.002102 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.002116 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.002139 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.002151 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:54Z","lastTransitionTime":"2026-01-21T10:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.105344 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.105396 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.105412 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.105431 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.105445 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:54Z","lastTransitionTime":"2026-01-21T10:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.179805 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 21:56:41.084971917 +0000 UTC Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.208552 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.208603 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.208614 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.208633 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.208645 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:54Z","lastTransitionTime":"2026-01-21T10:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.309814 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.309852 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:54 crc kubenswrapper[4881]: E0121 10:57:54.309987 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:54 crc kubenswrapper[4881]: E0121 10:57:54.310120 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.311738 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.311777 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.311806 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.311825 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.311841 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:54Z","lastTransitionTime":"2026-01-21T10:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.414883 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.414936 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.414949 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.414967 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.414979 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:54Z","lastTransitionTime":"2026-01-21T10:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.517734 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.517847 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.517861 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.517890 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.517905 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:54Z","lastTransitionTime":"2026-01-21T10:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.620695 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.620753 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.620773 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.620823 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.620841 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:54Z","lastTransitionTime":"2026-01-21T10:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.725081 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.725216 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.725231 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.725253 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.725268 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:54Z","lastTransitionTime":"2026-01-21T10:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.827938 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.828005 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.828024 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.828051 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.828071 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:54Z","lastTransitionTime":"2026-01-21T10:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.932055 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.932115 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.932128 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.932157 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.932175 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:54Z","lastTransitionTime":"2026-01-21T10:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.037759 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.037826 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.037840 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.037861 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.037872 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:55Z","lastTransitionTime":"2026-01-21T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.141050 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.141086 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.141096 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.141112 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.141121 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:55Z","lastTransitionTime":"2026-01-21T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.180178 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 21:28:37.740777945 +0000 UTC Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.245075 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.245112 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.245125 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.245147 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.245160 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:55Z","lastTransitionTime":"2026-01-21T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.287291 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.287374 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.287418 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.287460 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.287484 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:55Z","lastTransitionTime":"2026-01-21T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:55 crc kubenswrapper[4881]: E0121 10:57:55.304909 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:55Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.310194 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.310265 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:55 crc kubenswrapper[4881]: E0121 10:57:55.310424 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:57:55 crc kubenswrapper[4881]: E0121 10:57:55.310692 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.315898 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.315963 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.315976 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.316002 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.316015 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:55Z","lastTransitionTime":"2026-01-21T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:55 crc kubenswrapper[4881]: E0121 10:57:55.333388 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:55Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.338569 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.338637 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.338648 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.338667 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.338678 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:55Z","lastTransitionTime":"2026-01-21T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:55 crc kubenswrapper[4881]: E0121 10:57:55.352397 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:55Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.356905 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.356955 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
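While the status patches fail on the expired webhook certificate, the Ready=False condition itself keeps citing a second, independent cause: the container runtime reports NetworkReady=false because nothing has yet written a CNI config into /etc/kubernetes/cni/net.d/. A rough, hypothetical approximation of the directory scan behind the "no CNI configuration file" message (the real check lives in the runtime's CNI config loader; the accepted extensions used here are an assumption):

// cnicheck.go - approximate why the runtime keeps reporting
// "no CNI configuration file in /etc/kubernetes/cni/net.d/":
// the loader scans the conf dir for config files and finds none
// until the network operator starts and writes one.
// Hypothetical sketch, not the runtime's actual code.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil && !os.IsNotExist(err) {
		log.Fatal(err)
	}
	var confs []string
	for _, e := range entries {
		// Extensions assumed; CNI config files are typically
		// *.conf, *.conflist, or *.json.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		fmt.Printf("no CNI configuration file in %s. Has your network provider started?\n", dir)
		return
	}
	fmt.Println("found CNI config:", confs)
}

Until that directory is populated, every pod sync that needs pod networking is skipped, which is exactly what the pod_workers errors for network-metrics-daemon-dtv4t and networking-console-plugin-85b44fc459-gdk6g show above.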
event="NodeHasNoDiskPressure" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.356966 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.356991 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.357007 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:55Z","lastTransitionTime":"2026-01-21T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:55 crc kubenswrapper[4881]: E0121 10:57:55.371822 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:55Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.376921 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.376974 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.376987 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.377007 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.377022 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:55Z","lastTransitionTime":"2026-01-21T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:55 crc kubenswrapper[4881]: E0121 10:57:55.393172 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:55Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:55 crc kubenswrapper[4881]: E0121 10:57:55.393301 4881 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.395327 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.395377 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.395386 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.395407 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.395418 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:55Z","lastTransitionTime":"2026-01-21T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.499105 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.499174 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.499186 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.499209 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.499227 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:55Z","lastTransitionTime":"2026-01-21T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.602252 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.602306 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.602321 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.602343 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.602359 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:55Z","lastTransitionTime":"2026-01-21T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.705304 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.705353 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.705363 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.705381 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.705391 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:55Z","lastTransitionTime":"2026-01-21T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.809732 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.809815 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.809832 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.809854 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.809867 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:55Z","lastTransitionTime":"2026-01-21T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.913498 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.913554 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.913568 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.913620 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.913635 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:55Z","lastTransitionTime":"2026-01-21T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.016928 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.016993 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.017009 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.017035 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.017049 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:56Z","lastTransitionTime":"2026-01-21T10:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.120630 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.120715 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.120733 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.120754 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.120823 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:56Z","lastTransitionTime":"2026-01-21T10:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.181393 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 18:21:43.306836786 +0000 UTC Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.228314 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.228359 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.228369 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.228389 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.228401 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:56Z","lastTransitionTime":"2026-01-21T10:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.310585 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.310646 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:56 crc kubenswrapper[4881]: E0121 10:57:56.310731 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:56 crc kubenswrapper[4881]: E0121 10:57:56.310868 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.332932 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.332992 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.333006 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.333033 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.333047 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:56Z","lastTransitionTime":"2026-01-21T10:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.435718 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.435762 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.435774 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.435812 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.435825 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:56Z","lastTransitionTime":"2026-01-21T10:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} [... the same Recording event / "Node became not ready" sequence repeats every ~100 ms from 10:57:56.539 through 10:57:57.059 with only the timestamps changing ...] Jan 21 10:57:57 crc kubenswrapper[4881]: I0121 10:57:57.059343 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:57Z","lastTransitionTime":"2026-01-21T10:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:57 crc kubenswrapper[4881]: I0121 10:57:57.162920 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:57 crc kubenswrapper[4881]: I0121 10:57:57.162998 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:57 crc kubenswrapper[4881]: I0121 10:57:57.163009 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:57 crc kubenswrapper[4881]: I0121 10:57:57.163039 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:57 crc kubenswrapper[4881]: I0121 10:57:57.163054 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:57Z","lastTransitionTime":"2026-01-21T10:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:57 crc kubenswrapper[4881]: I0121 10:57:57.182206 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 12:41:34.915775908 +0000 UTC Jan 21 10:57:57 crc kubenswrapper[4881]: I0121 10:57:57.266258 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:57 crc kubenswrapper[4881]: I0121 10:57:57.266320 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:57 crc kubenswrapper[4881]: I0121 10:57:57.266334 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:57 crc kubenswrapper[4881]: I0121 10:57:57.266358 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:57 crc kubenswrapper[4881]: I0121 10:57:57.266370 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:57Z","lastTransitionTime":"2026-01-21T10:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:57 crc kubenswrapper[4881]: I0121 10:57:57.312999 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:57 crc kubenswrapper[4881]: I0121 10:57:57.313210 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:57 crc kubenswrapper[4881]: E0121 10:57:57.313378 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:57 crc kubenswrapper[4881]: E0121 10:57:57.313685 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:57:57 crc kubenswrapper[4881]: I0121 10:57:57.369239 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:57 crc kubenswrapper[4881]: I0121 10:57:57.369281 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:57 crc kubenswrapper[4881]: I0121 10:57:57.369290 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:57 crc kubenswrapper[4881]: I0121 10:57:57.369311 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:57 crc kubenswrapper[4881]: I0121 10:57:57.369322 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:57Z","lastTransitionTime":"2026-01-21T10:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:57 crc kubenswrapper[4881]: I0121 10:57:57.473223 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:57 crc kubenswrapper[4881]: I0121 10:57:57.473300 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:57 crc kubenswrapper[4881]: I0121 10:57:57.473318 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:57 crc kubenswrapper[4881]: I0121 10:57:57.473357 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:57 crc kubenswrapper[4881]: I0121 10:57:57.473392 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:57Z","lastTransitionTime":"2026-01-21T10:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} [... the same Recording event / "Node became not ready" sequence repeats every ~100 ms from 10:57:57.577 through 10:57:58.094 with only the timestamps changing ...] Jan 21 10:57:58 crc kubenswrapper[4881]: I0121 10:57:58.094496 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:58Z","lastTransitionTime":"2026-01-21T10:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:58 crc kubenswrapper[4881]: I0121 10:57:58.183160 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 19:06:23.006003288 +0000 UTC Jan 21 10:57:58 crc kubenswrapper[4881]: I0121 10:57:58.197274 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:58 crc kubenswrapper[4881]: I0121 10:57:58.197322 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:58 crc kubenswrapper[4881]: I0121 10:57:58.197334 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:58 crc kubenswrapper[4881]: I0121 10:57:58.197353 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:58 crc kubenswrapper[4881]: I0121 10:57:58.197363 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:58Z","lastTransitionTime":"2026-01-21T10:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:58 crc kubenswrapper[4881]: I0121 10:57:58.300612 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:58 crc kubenswrapper[4881]: I0121 10:57:58.300663 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:58 crc kubenswrapper[4881]: I0121 10:57:58.300674 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:58 crc kubenswrapper[4881]: I0121 10:57:58.300691 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:58 crc kubenswrapper[4881]: I0121 10:57:58.300704 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:58Z","lastTransitionTime":"2026-01-21T10:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:58 crc kubenswrapper[4881]: I0121 10:57:58.310094 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:58 crc kubenswrapper[4881]: I0121 10:57:58.310235 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:58 crc kubenswrapper[4881]: E0121 10:57:58.310348 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:58 crc kubenswrapper[4881]: E0121 10:57:58.310923 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:58 crc kubenswrapper[4881]: I0121 10:57:58.311242 4881 scope.go:117] "RemoveContainer" containerID="5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563" Jan 21 10:57:58 crc kubenswrapper[4881]: I0121 10:57:58.403599 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:58 crc kubenswrapper[4881]: I0121 10:57:58.403645 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:58 crc kubenswrapper[4881]: I0121 10:57:58.403660 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:58 crc kubenswrapper[4881]: I0121 10:57:58.403681 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:58 crc kubenswrapper[4881]: I0121 10:57:58.403698 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:58Z","lastTransitionTime":"2026-01-21T10:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:58 crc kubenswrapper[4881]: I0121 10:57:58.507265 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:58 crc kubenswrapper[4881]: I0121 10:57:58.507317 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:58 crc kubenswrapper[4881]: I0121 10:57:58.507331 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:58 crc kubenswrapper[4881]: I0121 10:57:58.507351 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:58 crc kubenswrapper[4881]: I0121 10:57:58.507363 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:58Z","lastTransitionTime":"2026-01-21T10:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} [... the same Recording event / "Node became not ready" sequence repeats every ~100 ms from 10:57:58.610 through 10:57:59.024 with only the timestamps changing ...] Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.024562 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:59Z","lastTransitionTime":"2026-01-21T10:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.093208 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovnkube-controller/1.log" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.096212 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerStarted","Data":"ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2"} Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.096973 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.115452 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13d0f0c4-fa31-44ba-bc94-c0a80fc1b2df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ef83fedf9cc77cf73fdd00486ec9b0b04712a60a5448402754a44ad46da439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36430b9d5b01b4a6f3b9e7b58bfbec0c258f34847b321cb45bc3b23f84cf09fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eba9cbb70fbd88687c81b18ad50f8386f83
6bf2fa2c8f9e1c503a20af985416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.127894 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.128408 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.128421 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.128442 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.128454 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:59Z","lastTransitionTime":"2026-01-21T10:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.132623 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.147645 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.167759 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.183826 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 13:08:41.238398932 +0000 UTC Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.186529 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.215114 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.229441 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.231274 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.231341 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.231358 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.231382 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.231396 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:59Z","lastTransitionTime":"2026-01-21T10:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.241364 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.258967 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.276451 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.289529 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.301323 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.310809 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:59 crc kubenswrapper[4881]: E0121 10:57:59.310989 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.311305 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:59 crc kubenswrapper[4881]: E0121 10:57:59.311452 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.320336 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.334740 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.334823 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.334838 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.334857 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.334868 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:59Z","lastTransitionTime":"2026-01-21T10:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.335097 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.348300 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.370413 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/r
ootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.395109 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c
9753266141b14f67cb0799a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:36Z\\\",\\\"message\\\":\\\"-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 10:57:35.987756 6363 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 10:57:35.987768 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0121 10:57:35.988839 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuse
s\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.458087 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.458138 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.458148 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.458168 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.458181 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:59Z","lastTransitionTime":"2026-01-21T10:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.563558 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.563621 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.563635 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.563660 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.563670 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:59Z","lastTransitionTime":"2026-01-21T10:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.666988 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.667043 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.667054 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.667070 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.667108 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:59Z","lastTransitionTime":"2026-01-21T10:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.769923 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.769978 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.769989 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.770010 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.770050 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:59Z","lastTransitionTime":"2026-01-21T10:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.873721 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.873774 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.873801 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.873819 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.873830 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:59Z","lastTransitionTime":"2026-01-21T10:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.980226 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.980272 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.980282 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.980298 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.980309 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:59Z","lastTransitionTime":"2026-01-21T10:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.084205 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.084234 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.084242 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.084256 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.084266 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:00Z","lastTransitionTime":"2026-01-21T10:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.184209 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 21:54:49.103031506 +0000 UTC Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.188355 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.188392 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.188428 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.188446 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.188458 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:00Z","lastTransitionTime":"2026-01-21T10:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.291110 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.291226 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.291240 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.291265 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.291281 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:00Z","lastTransitionTime":"2026-01-21T10:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.310417 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:58:00 crc kubenswrapper[4881]: E0121 10:58:00.310542 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.310726 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:58:00 crc kubenswrapper[4881]: E0121 10:58:00.310772 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.394986 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.395059 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.395077 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.395100 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.395115 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:00Z","lastTransitionTime":"2026-01-21T10:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.498557 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.498612 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.498624 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.498644 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.498660 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:00Z","lastTransitionTime":"2026-01-21T10:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.602220 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.602263 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.602274 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.602291 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.602302 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:00Z","lastTransitionTime":"2026-01-21T10:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.704455 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.704482 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.704492 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.704506 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.704513 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:00Z","lastTransitionTime":"2026-01-21T10:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.808218 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.808266 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.808277 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.808296 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.808309 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:00Z","lastTransitionTime":"2026-01-21T10:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.911359 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.911430 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.911445 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.911466 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.911483 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:00Z","lastTransitionTime":"2026-01-21T10:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.014549 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.014594 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.014604 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.014622 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.014632 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:01Z","lastTransitionTime":"2026-01-21T10:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.117165 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.117209 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.117222 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.117239 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.117249 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:01Z","lastTransitionTime":"2026-01-21T10:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.122225 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovnkube-controller/2.log" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.123007 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovnkube-controller/1.log" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.126075 4881 generic.go:334] "Generic (PLEG): container finished" podID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerID="ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2" exitCode=1 Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.126127 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerDied","Data":"ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2"} Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.126179 4881 scope.go:117] "RemoveContainer" containerID="5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.127103 4881 scope.go:117] "RemoveContainer" containerID="ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2" Jan 21 10:58:01 crc kubenswrapper[4881]: E0121 10:58:01.127297 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bx64f_openshift-ovn-kubernetes(e8bb6d97-b3b8-4e31-b704-8e565385ab26)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.145327 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13d0f0c4-fa31-44ba-bc94-c0a80fc1b2df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ef83fedf9cc77cf73fdd00486ec9b0b04712a60a5448402754a44ad46da439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36430b9d5b01b4a6f3b9e7b58bfbec0c258f34847b321cb45bc3b23f84cf09fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eba9cbb70fbd88687c81b18ad50f8386f836bf2fa2c8f9e1c503a20af985416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.161560 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 
2025-08-24T17:21:41Z" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.174801 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.184394 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 22:51:05.655376468 +0000 UTC Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.190864 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.205565 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.221107 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.221161 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.221172 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.221193 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.221211 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:01Z","lastTransitionTime":"2026-01-21T10:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.222697 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.234857 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.246074 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.261036 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.275184 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.288552 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.300158 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.310119 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.310115 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:01 crc kubenswrapper[4881]: E0121 10:58:01.310301 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:58:01 crc kubenswrapper[4881]: E0121 10:58:01.310468 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.318062 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.323774 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.323868 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.323880 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.323905 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.323923 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:01Z","lastTransitionTime":"2026-01-21T10:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.335455 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.354208 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.369963 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/r
ootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.405149 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c
9753266141b14f67cb0799a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:36Z\\\",\\\"message\\\":\\\"-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 10:57:35.987756 6363 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 10:57:35.987768 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0121 10:57:35.988839 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:58:00Z\\\",\\\"message\\\":\\\"work policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:00Z is after 2025-08-24T17:21:41Z]\\\\nI0121 10:58:00.782399 6726 services_controller.go:434] Service openshift-kube-controller-manager/kube-controller-manager retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kube-controller-manager openshift-kube-controller-manager 90927ca1-43e2-420d-8485-a35952e82cd9 4812 0 2025-02-23 05:22:57 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:kube-controller-manager] map[operator.openshift.io/spec-hash:bb05a56151ce98d11c8554843985ba99e0498dcafd98129435c2d982c5ea4c11 
service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.427352 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.427400 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.427420 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.427438 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.427449 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:01Z","lastTransitionTime":"2026-01-21T10:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.530228 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.530310 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.530323 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.530344 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.530357 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:01Z","lastTransitionTime":"2026-01-21T10:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.634750 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.634876 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.634888 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.634913 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.634927 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:01Z","lastTransitionTime":"2026-01-21T10:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.738022 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.738075 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.738088 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.738110 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.738122 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:01Z","lastTransitionTime":"2026-01-21T10:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.841118 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.841147 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.841156 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.841168 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.841178 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:01Z","lastTransitionTime":"2026-01-21T10:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.943524 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.943584 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.943597 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.943614 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.943626 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:01Z","lastTransitionTime":"2026-01-21T10:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.046488 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.046536 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.046546 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.046564 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.046574 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:02Z","lastTransitionTime":"2026-01-21T10:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.132210 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovnkube-controller/2.log" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.149375 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.149416 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.149425 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.149439 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.149450 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:02Z","lastTransitionTime":"2026-01-21T10:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.185245 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 03:20:52.539561661 +0000 UTC Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.253006 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.253082 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.253093 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.253115 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.253135 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:02Z","lastTransitionTime":"2026-01-21T10:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.309714 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.309777 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:58:02 crc kubenswrapper[4881]: E0121 10:58:02.309975 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:58:02 crc kubenswrapper[4881]: E0121 10:58:02.310180 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.356442 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.356500 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.356512 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.356534 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.356551 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:02Z","lastTransitionTime":"2026-01-21T10:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.459144 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.459180 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.459192 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.459206 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.459215 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:02Z","lastTransitionTime":"2026-01-21T10:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.562347 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.562403 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.562413 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.562433 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.562446 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:02Z","lastTransitionTime":"2026-01-21T10:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.665212 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.665255 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.665266 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.665286 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.665296 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:02Z","lastTransitionTime":"2026-01-21T10:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.768284 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.768322 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.768331 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.768347 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.768358 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:02Z","lastTransitionTime":"2026-01-21T10:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.871583 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.871615 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.871624 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.871640 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.871650 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:02Z","lastTransitionTime":"2026-01-21T10:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.974810 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.974851 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.974861 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.974878 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.974890 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:02Z","lastTransitionTime":"2026-01-21T10:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.078632 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.078677 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.078688 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.078705 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.078716 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:03Z","lastTransitionTime":"2026-01-21T10:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.182060 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.182112 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.182124 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.182145 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.182158 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:03Z","lastTransitionTime":"2026-01-21T10:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.186104 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 03:41:18.601177065 +0000 UTC Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.284976 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.285026 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.285038 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.285057 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.285071 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:03Z","lastTransitionTime":"2026-01-21T10:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.310919 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:58:03 crc kubenswrapper[4881]: E0121 10:58:03.311070 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.311568 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:03 crc kubenswrapper[4881]: E0121 10:58:03.311652 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.327255 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.343098 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.357256 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.371618 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.386803 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.388815 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.388866 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.388883 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.388969 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.388988 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:03Z","lastTransitionTime":"2026-01-21T10:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.404804 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.423853 4881 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 
10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.444901 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac19266
8036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:36Z\\\",\\\"message\\\":\\\"-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 10:57:35.987756 6363 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 10:57:35.987768 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0121 10:57:35.988839 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:58:00Z\\\",\\\"message\\\":\\\"work policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:00Z is after 2025-08-24T17:21:41Z]\\\\nI0121 10:58:00.782399 6726 services_controller.go:434] Service openshift-kube-controller-manager/kube-controller-manager retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kube-controller-manager openshift-kube-controller-manager 90927ca1-43e2-420d-8485-a35952e82cd9 4812 0 2025-02-23 05:22:57 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:kube-controller-manager] map[operator.openshift.io/spec-hash:bb05a56151ce98d11c8554843985ba99e0498dcafd98129435c2d982c5ea4c11 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.459287 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b
154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.475091 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.486494 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.491615 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.491669 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.491679 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.491699 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.491711 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:03Z","lastTransitionTime":"2026-01-21T10:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.499621 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13d0f0c4-fa31-44ba-bc94-c0a80fc1b2df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ef83fedf9cc77cf73fdd00486ec9b0b04712a60a5448402754a44ad46da439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36430b9d5b01b4a6f3b9e7b58bfbec0c258f34847b321cb45bc3b23f84cf09fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eba9cbb70fbd88687c81b18ad50f8386f836bf2fa2c8f9e1c503a20af985416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.513859 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.531230 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.546405 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.561030 4881 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.577434 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.595100 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.595153 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.595166 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.595190 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.595205 4881 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:03Z","lastTransitionTime":"2026-01-21T10:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.698391 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.698484 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.698508 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.698541 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.698567 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:03Z","lastTransitionTime":"2026-01-21T10:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.801267 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.801322 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.801331 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.801349 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.801364 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:03Z","lastTransitionTime":"2026-01-21T10:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.904641 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.904696 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.904706 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.904727 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.904740 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:03Z","lastTransitionTime":"2026-01-21T10:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.007599 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.007655 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.007666 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.007687 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.007702 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:04Z","lastTransitionTime":"2026-01-21T10:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.076838 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs\") pod \"network-metrics-daemon-dtv4t\" (UID: \"3552adbd-011f-4552-9e69-233b92c554c8\") " pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:04 crc kubenswrapper[4881]: E0121 10:58:04.077214 4881 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:58:04 crc kubenswrapper[4881]: E0121 10:58:04.077305 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs podName:3552adbd-011f-4552-9e69-233b92c554c8 nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.0772816 +0000 UTC m=+103.337238069 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs") pod "network-metrics-daemon-dtv4t" (UID: "3552adbd-011f-4552-9e69-233b92c554c8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.110307 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.110368 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.110382 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.110407 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.110422 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:04Z","lastTransitionTime":"2026-01-21T10:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.187155 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 22:53:28.710416456 +0000 UTC Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.214524 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.214626 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.214639 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.214660 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.214674 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:04Z","lastTransitionTime":"2026-01-21T10:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.310203 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:58:04 crc kubenswrapper[4881]: E0121 10:58:04.310425 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.310523 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:58:04 crc kubenswrapper[4881]: E0121 10:58:04.310676 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.318362 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.318402 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.318415 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.318436 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.318448 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:04Z","lastTransitionTime":"2026-01-21T10:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.421479 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.421583 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.421593 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.421608 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.421617 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:04Z","lastTransitionTime":"2026-01-21T10:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.524389 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.524444 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.524457 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.524475 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.524485 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:04Z","lastTransitionTime":"2026-01-21T10:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.627670 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.627726 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.627739 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.627762 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.627775 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:04Z","lastTransitionTime":"2026-01-21T10:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.730926 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.730964 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.730976 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.730992 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.731004 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:04Z","lastTransitionTime":"2026-01-21T10:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.834804 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.834860 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.834873 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.834893 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.834906 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:04Z","lastTransitionTime":"2026-01-21T10:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.937770 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.937839 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.937851 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.937871 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.937883 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:04Z","lastTransitionTime":"2026-01-21T10:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.040734 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.040778 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.040821 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.040837 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.040848 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:05Z","lastTransitionTime":"2026-01-21T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.143854 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.143909 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.143919 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.143943 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.143955 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:05Z","lastTransitionTime":"2026-01-21T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.187576 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 21:54:54.862743303 +0000 UTC Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.247005 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.247051 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.247061 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.247078 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.247087 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:05Z","lastTransitionTime":"2026-01-21T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.310266 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.310343 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:05 crc kubenswrapper[4881]: E0121 10:58:05.310477 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:58:05 crc kubenswrapper[4881]: E0121 10:58:05.310612 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.351380 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.351458 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.351476 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.351502 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.351518 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:05Z","lastTransitionTime":"2026-01-21T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.454292 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.454335 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.454347 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.454363 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.454375 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:05Z","lastTransitionTime":"2026-01-21T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.556842 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.556912 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.556933 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.556963 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.556980 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:05Z","lastTransitionTime":"2026-01-21T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.560881 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.560934 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.560943 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.560961 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.560979 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:05Z","lastTransitionTime":"2026-01-21T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:05 crc kubenswrapper[4881]: E0121 10:58:05.578277 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:05Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.582002 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.582358 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure"
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.582469 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.582584 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.582686 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:05Z","lastTransitionTime":"2026-01-21T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:05 crc kubenswrapper[4881]: E0121 10:58:05.596805 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... node status patch payload identical to the previous attempt elided ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:05Z is after 2025-08-24T17:21:41Z"
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.603129 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.603178 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
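Every retry in this block fails identically: the status PATCH is intercepted by the node.network-node-identity.openshift.io admission webhook, whose serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-21. A minimal Go sketch for confirming the expiry from the node itself; the endpoint address is the one in the log line, and InsecureSkipVerify is deliberate so the handshake survives long enough to read the expired certificate.

```go
// Inspect the serving certificate of the webhook endpoint named in the
// log ("https://127.0.0.1:9743/node") and compare its validity window
// with the local clock.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Skip chain verification on purpose: verification is exactly what
	// fails in the log, and we still want to see the certificate.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	leaf := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("notBefore:", leaf.NotBefore)
	fmt.Println("notAfter: ", leaf.NotAfter)
	// With the clock at 2026-01-21 this prints true, matching the
	// "certificate has expired or is not yet valid" error above.
	fmt.Println("expired:  ", time.Now().After(leaf.NotAfter))
}
```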
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.603194 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.603215 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.603235 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:05Z","lastTransitionTime":"2026-01-21T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:05 crc kubenswrapper[4881]: E0121 10:58:05.623232 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... node status patch payload identical to the previous attempt elided ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:05Z is after 2025-08-24T17:21:41Z"
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.628323 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.628392 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
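Underneath the webhook noise there is a second, independent fault that the NotReady condition keeps naming: /etc/kubernetes/cni/net.d/ contains no CNI configuration, so the container runtime reports NetworkReady=false until the network provider writes one. A small Go sketch of that check, assuming the usual libcni file extensions (.conf, .conflist, .json); it approximates the kubelet/CRI code path rather than reproducing it.

```go
// List CNI config files the way the network-readiness check roughly
// does: at least one config file must exist in the conf dir.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const confDir = "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read", confDir, ":", err)
		return
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config:", filepath.Join(confDir, e.Name()))
			found++
		}
	}
	if found == 0 {
		// This is the state the kubelet keeps reporting above:
		// NetworkReady=false until the network provider writes a config.
		fmt.Println("no CNI configuration file found - network stays NotReady")
	}
}
```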
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.628406 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.628425 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.628438 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:05Z","lastTransitionTime":"2026-01-21T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:05 crc kubenswrapper[4881]: E0121 10:58:05.646352 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... node status patch payload identical to the previous attempt elided ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:05Z is after 2025-08-24T17:21:41Z"
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.651111 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.651161 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
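Each elided payload is the same strategic merge patch over Node.status; the $setElementOrder/conditions directive pins the order of the conditions list after the merge. A trimmed sketch of its shape (two of the four conditions, with allocatable/capacity and the image list dropped), values copied from the entries above:

```go
// Build a miniature version of the status patch the kubelet keeps
// retrying. This is a sketch of the payload's shape, not the code the
// kubelet uses to produce it.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	patch := map[string]any{
		"status": map[string]any{
			// Strategic-merge directive: keep this ordering of the list.
			"$setElementOrder/conditions": []map[string]string{
				{"type": "MemoryPressure"}, {"type": "Ready"},
			},
			"conditions": []map[string]string{
				{"type": "MemoryPressure", "status": "False", "reason": "KubeletHasSufficientMemory"},
				{"type": "Ready", "status": "False", "reason": "KubeletNotReady"},
			},
		},
	}
	b, _ := json.MarshalIndent(patch, "", "  ")
	fmt.Println(string(b)) // this JSON is what the webhook rejects wholesale
}
```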
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.651172 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.651190 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.651201 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:05Z","lastTransitionTime":"2026-01-21T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:05 crc kubenswrapper[4881]: E0121 10:58:05.667381 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... node status patch payload identical to the previous attempt elided ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:05Z is after 2025-08-24T17:21:41Z"
Jan 21 10:58:05 crc kubenswrapper[4881]: E0121 10:58:05.667978 4881 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.670015 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
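The block above is one full retry cycle: the kubelet attempts the status patch a fixed number of times per sync, then gives up with "update node status exceeds retry count" until the next sync period. In the upstream kubelet the bound is the nodeStatusUpdateRetry constant, 5 at the time of writing; treat the exact value here as an assumption. A sketch of the loop's shape:

```go
// The shape of the retry loop behind "Error updating node status,
// will retry" followed by "update node status exceeds retry count".
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // assumed to mirror the kubelet constant

// patchNodeStatus stands in for the PATCH the webhook rejects; in the
// log every attempt fails with the same expired-certificate error.
func patchNodeStatus() error {
	return errors.New("x509: certificate has expired or is not yet valid")
}

func main() {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patchNodeStatus(); err != nil {
			fmt.Println("Error updating node status, will retry:", err)
			continue
		}
		return
	}
	fmt.Println("Unable to update node status: update node status exceeds retry count")
}
```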
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.670041 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.670070 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.670086 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.670096 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:05Z","lastTransitionTime":"2026-01-21T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.774057 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.774104 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.774115 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.774130 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.774142 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:05Z","lastTransitionTime":"2026-01-21T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.876608 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.876667 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.876681 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.876703 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.876716 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:05Z","lastTransitionTime":"2026-01-21T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.980525 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.980592 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.980607 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.980629 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.980660 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:05Z","lastTransitionTime":"2026-01-21T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.084452 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.084504 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.084515 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.084537 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.084562 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:06Z","lastTransitionTime":"2026-01-21T10:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.187568 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.187625 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.187635 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.187659 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.187670 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:06Z","lastTransitionTime":"2026-01-21T10:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.187732 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 18:11:22.371403159 +0000 UTC
Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.292203 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.292248 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.292263 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.292285 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.292305 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:06Z","lastTransitionTime":"2026-01-21T10:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.309877 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 10:58:06 crc kubenswrapper[4881]: E0121 10:58:06.310038 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.310320 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 10:58:06 crc kubenswrapper[4881]: E0121 10:58:06.310410 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.395114 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.395159 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.395220 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.395241 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.395253 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:06Z","lastTransitionTime":"2026-01-21T10:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.497894 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.497963 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.497978 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.497999 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.498013 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:06Z","lastTransitionTime":"2026-01-21T10:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[... status cycle repeats at 10:58:06.601, 10:58:06.706, 10:58:06.809, 10:58:06.914, 10:58:07.018 and 10:58:07.121 ...]
Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.188417 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 15:13:35.991515569 +0000 UTC
[... status cycle repeats at 10:58:07.224 ...]
Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.309747 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t"
Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.309871 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:58:07 crc kubenswrapper[4881]: E0121 10:58:07.309996 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8"
Jan 21 10:58:07 crc kubenswrapper[4881]: E0121 10:58:07.310107 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[... status cycle repeats at 10:58:07.327, 10:58:07.430, 10:58:07.534, 10:58:07.638, 10:58:07.743, 10:58:07.845, 10:58:07.949 and 10:58:08.052 ...]
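The "Error syncing pod" entries recur for the same few diagnostics and networking pods, each blocked on the missing network rather than failing for reasons of its own. A rough tally over this window makes that visible (a sketch with standard shell tools; the --since/--until bounds are taken from the excerpt's timestamps):

    # Count "Error syncing pod" occurrences per pod in the affected window
    $ journalctl -u kubelet --since "10:58:00" --until "10:58:10" \
        | grep 'Error syncing pod' \
        | grep -o 'pod="[^"]*"' | sort | uniq -c | sort -rn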
[... status cycle repeats at 10:58:08.156 ...]
Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.188730 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 21:55:24.625785758 +0000 UTC
[... status cycle repeats at 10:58:08.259 ...]
Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.309658 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 10:58:08 crc kubenswrapper[4881]: E0121 10:58:08.309849 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.310377 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 10:58:08 crc kubenswrapper[4881]: E0121 10:58:08.310883 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[... status cycle repeats at 10:58:08.363, 10:58:08.468, 10:58:08.576, 10:58:08.681, 10:58:08.785, 10:58:08.890, 10:58:08.993 and 10:58:09.097 ...]
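The kubelet-serving lines print a different rotation deadline on every pass (2025-12-09, 2026-01-03, 2025-11-11 so far): the certificate manager recomputes a randomly jittered deadline each time, and every computed value falls before the node's current date of 2026-01-21, so rotation is already due and the churn is expected while it retries. To inspect the serving certificate directly (a sketch; the path below is the kubelet's usual serving-cert location and is an assumption here):

    # Validity window of the kubelet's current serving certificate
    # (default path on this kind of node; adjust if the layout differs)
    $ sudo openssl x509 -noout -subject -dates \
        -in /var/lib/kubelet/pki/kubelet-server-current.pem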
Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.166830 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fs42r_09da9e14-f6d5-4346-a4a0-c17711e3b603/kube-multus/0.log"
Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.166897 4881 generic.go:334] "Generic (PLEG): container finished" podID="09da9e14-f6d5-4346-a4a0-c17711e3b603" containerID="821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb" exitCode=1
Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.166974 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fs42r" event={"ID":"09da9e14-f6d5-4346-a4a0-c17711e3b603","Type":"ContainerDied","Data":"821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb"}
Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.167754 4881 scope.go:117] "RemoveContainer" containerID="821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb"
Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.188586 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status [... full status payload elided; all four static-pod containers report Running ...] for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z"
Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.189577 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 21:12:26.801613395 +0000 UTC
[... status cycle repeats at 10:58:09.200 ...]
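From here on, every status patch fails identically: the API server must consult the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24, months before the node's current clock of 2026-01-21. Confirming from the node is one command (a sketch; host and port are quoted from the error text):

    # Print the expiry of whatever certificate the webhook is serving
    $ echo | openssl s_client -connect 127.0.0.1:9743 2>/dev/null \
        | openssl x509 -noout -subject -enddate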
Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.210617 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status [... payload elided; the check-endpoints container is waiting in ContainerCreating, its previous instance lost on pod deletion (\"The container could not be located when the pod was deleted. The container used to be Running\") ...] for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z"
Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.230004 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status [... payload elided; the terminated kube-multus container reports: \"2026-01-21T10:58:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\" ...] for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z"
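The multus status entry also preserves why kube-multus exited 1: it copied the CNI binaries, started its daemon, then spent 45 seconds (10:57:23 to 10:58:08) waiting for the OVN-Kubernetes readiness-indicator file that never appeared, which is the same reason /etc/kubernetes/cni/net.d/ stays empty. To check both ends of that dependency (a sketch; mapping the container path /host/run/multus/cni/net.d/ to /run/multus/cni/net.d/ on the host is an assumption about the mount):

    # Has the OVN-Kubernetes readiness indicator shown up yet?
    $ ls -l /run/multus/cni/net.d/

    # Is ovn-kubernetes itself running?
    $ oc -n openshift-ovn-kubernetes get pods -o wide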
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z" Jan 21 
10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.260526 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.277006 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.293048 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.303489 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.303551 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.303566 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.303584 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.303594 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:09Z","lastTransitionTime":"2026-01-21T10:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.307682 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.309925 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.309922 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:58:09 crc kubenswrapper[4881]: E0121 10:58:09.310057 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:58:09 crc kubenswrapper[4881]: E0121 10:58:09.310130 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.324896 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.389282 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c
9753266141b14f67cb0799a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:36Z\\\",\\\"message\\\":\\\"-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 10:57:35.987756 6363 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 10:57:35.987768 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0121 10:57:35.988839 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:58:00Z\\\",\\\"message\\\":\\\"work policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:00Z is after 2025-08-24T17:21:41Z]\\\\nI0121 10:58:00.782399 6726 services_controller.go:434] Service openshift-kube-controller-manager/kube-controller-manager retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kube-controller-manager openshift-kube-controller-manager 90927ca1-43e2-420d-8485-a35952e82cd9 4812 0 2025-02-23 05:22:57 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:kube-controller-manager] map[operator.openshift.io/spec-hash:bb05a56151ce98d11c8554843985ba99e0498dcafd98129435c2d982c5ea4c11 
service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.405432 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.405469 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.405480 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.405493 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.405503 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:09Z","lastTransitionTime":"2026-01-21T10:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.406449 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13d0f0c4-fa31-44ba-bc94-c0a80fc1b2df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ef83fedf9cc77cf73fdd00486ec9b0b04712a60a5448402754a44ad46da439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36430b9d5b01b4a6f3b9e7b58bfbec0c258f34847b321cb45bc3b23f84cf09fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eba9cbb70fbd88687c81b18ad50f8386f836bf2fa2c8f9e1c503a20af985416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.424514 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.436728 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.450454 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.464431 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.475663 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.486721 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.507988 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.508022 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.508032 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.508048 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.508060 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:09Z","lastTransitionTime":"2026-01-21T10:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.611011 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.611079 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.611096 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.611117 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.611133 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:09Z","lastTransitionTime":"2026-01-21T10:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.716664 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.716711 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.716720 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.716733 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.716742 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:09Z","lastTransitionTime":"2026-01-21T10:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.818608 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.818673 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.818682 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.818695 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.818703 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:09Z","lastTransitionTime":"2026-01-21T10:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.922316 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.922362 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.922374 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.922392 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.922402 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:09Z","lastTransitionTime":"2026-01-21T10:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.025736 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.025817 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.025831 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.025848 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.025857 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:10Z","lastTransitionTime":"2026-01-21T10:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.129881 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.129940 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.129964 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.129995 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.130026 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:10Z","lastTransitionTime":"2026-01-21T10:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.175284 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fs42r_09da9e14-f6d5-4346-a4a0-c17711e3b603/kube-multus/0.log" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.175345 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fs42r" event={"ID":"09da9e14-f6d5-4346-a4a0-c17711e3b603","Type":"ContainerStarted","Data":"e44307f5cc08335dc686c05c12b4ac57aeb2211a1072fff108a06b37b2e1461b"} Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.189945 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 11:44:40.005019262 +0000 UTC Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.201064 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.219688 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.232573 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.232619 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.232631 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.232647 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.232657 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:10Z","lastTransitionTime":"2026-01-21T10:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.237452 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.252586 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.272963 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.288960 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.307626 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44307f5cc08335dc686c05c12b4ac57aeb2211a1072fff108a06b37b2e1461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:58:08Z\\\",\\\"message\\\":\\\"2026-01-21T10:57:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0cb853ce-7a29-40b7-96bf-1304acd74419\\\\n2026-01-21T10:57:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0cb853ce-7a29-40b7-96bf-1304acd74419 to /host/opt/cni/bin/\\\\n2026-01-21T10:57:23Z [verbose] multus-daemon started\\\\n2026-01-21T10:57:23Z [verbose] Readiness Indicator file check\\\\n2026-01-21T10:58:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.309733 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:58:10 crc kubenswrapper[4881]: E0121 10:58:10.309955 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.310137 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:58:10 crc kubenswrapper[4881]: E0121 10:58:10.310238 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.324873 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.335280 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.335336 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.335348 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.335373 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.335386 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:10Z","lastTransitionTime":"2026-01-21T10:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.342537 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.357546 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.375615 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.395069 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.410549 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.437564 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:36Z\\\",\\\"message\\\":\\\"-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 10:57:35.987756 6363 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 10:57:35.987768 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0121 10:57:35.988839 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:58:00Z\\\",\\\"message\\\":\\\"work policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:00Z is after 2025-08-24T17:21:41Z]\\\\nI0121 10:58:00.782399 6726 services_controller.go:434] Service openshift-kube-controller-manager/kube-controller-manager retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kube-controller-manager openshift-kube-controller-manager 90927ca1-43e2-420d-8485-a35952e82cd9 4812 0 2025-02-23 05:22:57 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:kube-controller-manager] map[operator.openshift.io/spec-hash:bb05a56151ce98d11c8554843985ba99e0498dcafd98129435c2d982c5ea4c11 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.438525 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.438600 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.438621 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.438646 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.438663 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:10Z","lastTransitionTime":"2026-01-21T10:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.455939 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13d0f0c4-fa31-44ba-bc94-c0a80fc1b2df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ef83fedf9cc77cf73fdd00486ec9b0b04712a60a5448402754a44ad46da439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36430b9d5b01b4a6f3b9e7b58bfbec0c258f34847b321cb45bc3b23f84cf09fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eba9cbb70fbd88687c81b18ad50f8386f836bf2fa2c8f9e1c503a20af985416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.477758 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.493625 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.542675 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.542752 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.542762 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.542803 4881 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.542817 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:10Z","lastTransitionTime":"2026-01-21T10:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.645935 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.646001 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.646019 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.646044 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.646062 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:10Z","lastTransitionTime":"2026-01-21T10:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.749502 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.749588 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.749607 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.749635 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.749654 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:10Z","lastTransitionTime":"2026-01-21T10:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.854259 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.854311 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.854328 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.854349 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.854366 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:10Z","lastTransitionTime":"2026-01-21T10:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.957381 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.957422 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.957437 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.957453 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.957464 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:10Z","lastTransitionTime":"2026-01-21T10:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.060613 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.060660 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.060672 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.060688 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.060700 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:11Z","lastTransitionTime":"2026-01-21T10:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.163054 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.163092 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.163100 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.163113 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.163123 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:11Z","lastTransitionTime":"2026-01-21T10:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.190570 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 13:48:32.519156667 +0000 UTC Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.266601 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.266648 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.266659 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.266705 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.266720 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:11Z","lastTransitionTime":"2026-01-21T10:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.310715 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.310871 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:11 crc kubenswrapper[4881]: E0121 10:58:11.310913 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:58:11 crc kubenswrapper[4881]: E0121 10:58:11.311080 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.372098 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.372163 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.372174 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.372192 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.372204 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:11Z","lastTransitionTime":"2026-01-21T10:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.476259 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.476324 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.476337 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.476359 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.476371 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:11Z","lastTransitionTime":"2026-01-21T10:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.579340 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.579396 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.579408 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.579428 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.579441 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:11Z","lastTransitionTime":"2026-01-21T10:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.682619 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.682667 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.682692 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.682712 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.682724 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:11Z","lastTransitionTime":"2026-01-21T10:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.786886 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.787014 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.787085 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.787120 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.787143 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:11Z","lastTransitionTime":"2026-01-21T10:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.891456 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.891528 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.891546 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.891572 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.891594 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:11Z","lastTransitionTime":"2026-01-21T10:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.995485 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.995520 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.995529 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.995543 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.995553 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:11Z","lastTransitionTime":"2026-01-21T10:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.099067 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.099143 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.099164 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.099194 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.099212 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:12Z","lastTransitionTime":"2026-01-21T10:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.191693 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 13:36:44.713958976 +0000 UTC Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.202514 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.202570 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.202580 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.202599 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.202611 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:12Z","lastTransitionTime":"2026-01-21T10:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.307222 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.307291 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.307317 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.307342 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.307359 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:12Z","lastTransitionTime":"2026-01-21T10:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.310071 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.310145 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:58:12 crc kubenswrapper[4881]: E0121 10:58:12.310387 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:58:12 crc kubenswrapper[4881]: E0121 10:58:12.310547 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.329422 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.411646 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.411706 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.411725 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.411752 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.411770 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:12Z","lastTransitionTime":"2026-01-21T10:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.515218 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.515279 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.515296 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.515321 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.515338 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:12Z","lastTransitionTime":"2026-01-21T10:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.618480 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.618526 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.618543 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.618564 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.618581 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:12Z","lastTransitionTime":"2026-01-21T10:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.722236 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.722305 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.722324 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.722351 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.722369 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:12Z","lastTransitionTime":"2026-01-21T10:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.826283 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.826336 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.826347 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.826368 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.826382 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:12Z","lastTransitionTime":"2026-01-21T10:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.929856 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.929905 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.929917 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.929934 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.929943 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:12Z","lastTransitionTime":"2026-01-21T10:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.033105 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.033180 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.033189 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.033215 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.033239 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:13Z","lastTransitionTime":"2026-01-21T10:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.140534 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.140600 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.140612 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.140631 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.140645 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:13Z","lastTransitionTime":"2026-01-21T10:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.192828 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 21:24:06.121710163 +0000 UTC Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.243557 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.243606 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.243618 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.243636 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.243649 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:13Z","lastTransitionTime":"2026-01-21T10:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.310525 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.310653 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:13 crc kubenswrapper[4881]: E0121 10:58:13.310852 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:58:13 crc kubenswrapper[4881]: E0121 10:58:13.311004 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.311742 4881 scope.go:117] "RemoveContainer" containerID="ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2" Jan 21 10:58:13 crc kubenswrapper[4881]: E0121 10:58:13.311929 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bx64f_openshift-ovn-kubernetes(e8bb6d97-b3b8-4e31-b704-8e565385ab26)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.327122 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\
\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.342583 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.347223 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.347462 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.348240 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.348332 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.348893 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:13Z","lastTransitionTime":"2026-01-21T10:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.356280 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f9987a1-d9f5-467c-82b2-533a714c4c62\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf7bf06a11465e04a80fe7ae667f9c15741137062514a621955622d2b339dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988de7ed33eebe3cf67b8c6362d70c761e509feb2c3b72e6f6a4ffb9cddbf421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988de7ed33eebe3cf67b8c6362d70c761e509feb2c3b72e6f6a4ffb9cddbf421\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.374234 4881 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.390539 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.408117 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44307f5cc08335dc686c05c12b4ac57aeb2211a1072fff108a06b37b2e1461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:58:08Z\\\",\\\"message\\\":\\\"2026-01-21T10:57:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0cb853ce-7a29-40b7-96bf-1304acd74419\\\\n2026-01-21T10:57:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0cb853ce-7a29-40b7-96bf-1304acd74419 to /host/opt/cni/bin/\\\\n2026-01-21T10:57:23Z [verbose] multus-daemon started\\\\n2026-01-21T10:57:23Z [verbose] Readiness Indicator file check\\\\n2026-01-21T10:58:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.433074 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:36Z\\\",\\\"message\\\":\\\"-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 10:57:35.987756 6363 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 10:57:35.987768 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0121 10:57:35.988839 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:58:00Z\\\",\\\"message\\\":\\\"work policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:00Z is after 2025-08-24T17:21:41Z]\\\\nI0121 10:58:00.782399 6726 services_controller.go:434] Service openshift-kube-controller-manager/kube-controller-manager retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kube-controller-manager openshift-kube-controller-manager 
90927ca1-43e2-420d-8485-a35952e82cd9 4812 0 2025-02-23 05:22:57 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:kube-controller-manager] map[operator.openshift.io/spec-hash:bb05a56151ce98d11c8554843985ba99e0498dcafd98129435c2d982c5ea4c11 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"m
ountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.452005 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.452040 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.452048 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.452062 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.452073 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:13Z","lastTransitionTime":"2026-01-21T10:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.452951 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.467279 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.485459 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.504916 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.526482 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13d0f0c4-fa31-44ba-bc94-c0a80fc1b2df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ef83fedf9cc77cf73fdd00486ec9b0b04712a60a5448402754a44ad46da439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36430b9d5b01b4a6f3b9e7b58bfbec0c258f34847b321cb45bc3b23f84cf09fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eba9cbb70fbd88687c81b18ad50f8386f836bf2fa2c8f9e1c503a20af985416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.541418 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.554362 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.554396 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.554407 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.554424 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.554437 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:13Z","lastTransitionTime":"2026-01-21T10:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.555233 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.566404 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.582924 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.599599 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.613568 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.624348 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 
10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.634022 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.643983 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f9987a1-d9f5-467c-82b2-533a714c4c62\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf7bf06a11465e04a80fe7ae667f9c15741137062514a621955622d2b339dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988de7ed33eebe3cf67b8c6362d70c761e509feb2c3b72e6f6a4ffb9cddbf421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988de7ed33eebe3cf67b8c6362d70c761e509feb2c3b72e6f6a4ffb9cddbf421\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.656670 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.657067 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.657136 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.657147 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.657162 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.657171 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:13Z","lastTransitionTime":"2026-01-21T10:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.674179 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.690841 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44307f5cc08335dc686c05c12b4ac57aeb2211a1072fff108a06b37b2e1461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:58:08Z\\\",\\\"message\\\":\\\"2026-01-21T10:57:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0cb853ce-7a29-40b7-96bf-1304acd74419\\\\n2026-01-21T10:57:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0cb853ce-7a29-40b7-96bf-1304acd74419 to /host/opt/cni/bin/\\\\n2026-01-21T10:57:23Z [verbose] multus-daemon started\\\\n2026-01-21T10:57:23Z [verbose] Readiness Indicator file check\\\\n2026-01-21T10:58:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.717492 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:58:00Z\\\",\\\"message\\\":\\\"work policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:00Z is after 2025-08-24T17:21:41Z]\\\\nI0121 10:58:00.782399 6726 services_controller.go:434] Service openshift-kube-controller-manager/kube-controller-manager retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kube-controller-manager openshift-kube-controller-manager 90927ca1-43e2-420d-8485-a35952e82cd9 4812 0 2025-02-23 05:22:57 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:kube-controller-manager] map[operator.openshift.io/spec-hash:bb05a56151ce98d11c8554843985ba99e0498dcafd98129435c2d982c5ea4c11 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bx64f_openshift-ovn-kubernetes(e8bb6d97-b3b8-4e31-b704-8e565385ab26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.734036 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.749608 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.759949 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.759990 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.760000 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.760013 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.760021 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:13Z","lastTransitionTime":"2026-01-21T10:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.766847 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.779948 4881 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 
10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.795359 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13d0f0c4-fa31-44ba-bc94-c0a80fc1b2df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ef83fedf9cc77cf73fdd00486ec9b0b04712a60a5448402754a44ad46da439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36430b9d5b01b4a6f3b9e7b58bfbec0c258f34847b321cb45bc3b23f84cf09fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eba9cbb70fbd88687c81b18ad50f8386f836bf2fa2c8f9e1c503a20af985416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.812053 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.824531 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.835054 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.850825 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.862547 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.862599 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.862613 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.862635 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.862650 4881 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:13Z","lastTransitionTime":"2026-01-21T10:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.864017 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.878297 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.965547 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:13 
crc kubenswrapper[4881]: I0121 10:58:13.965621 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.965639 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.965665 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.965682 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:13Z","lastTransitionTime":"2026-01-21T10:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.068843 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.068912 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.068932 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.068958 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.068975 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:14Z","lastTransitionTime":"2026-01-21T10:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.171524 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.171575 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.171592 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.171616 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.171636 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:14Z","lastTransitionTime":"2026-01-21T10:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.193839 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 16:55:32.885581457 +0000 UTC Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.274146 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.274180 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.274190 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.274207 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.274218 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:14Z","lastTransitionTime":"2026-01-21T10:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.310071 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:58:14 crc kubenswrapper[4881]: E0121 10:58:14.310257 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.310699 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:58:14 crc kubenswrapper[4881]: E0121 10:58:14.310774 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.376501 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.376542 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.376551 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.376565 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.376574 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:14Z","lastTransitionTime":"2026-01-21T10:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.480202 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.480260 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.480271 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.480290 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.480303 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:14Z","lastTransitionTime":"2026-01-21T10:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.583684 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.583728 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.583741 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.583829 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.583853 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:14Z","lastTransitionTime":"2026-01-21T10:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.688097 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.688178 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.688227 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.688274 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.688285 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:14Z","lastTransitionTime":"2026-01-21T10:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.793346 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.793517 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.793596 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.793653 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.793683 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:14Z","lastTransitionTime":"2026-01-21T10:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.899374 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.899428 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.899441 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.899458 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.899471 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:14Z","lastTransitionTime":"2026-01-21T10:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.002912 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.002959 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.002971 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.002998 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.003013 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:15Z","lastTransitionTime":"2026-01-21T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.106325 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.106400 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.106420 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.106494 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.106513 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:15Z","lastTransitionTime":"2026-01-21T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.194515 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 16:20:43.357645818 +0000 UTC Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.209398 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.209447 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.209460 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.209477 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.209489 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:15Z","lastTransitionTime":"2026-01-21T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.310391 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:15 crc kubenswrapper[4881]: E0121 10:58:15.310678 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.310834 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:58:15 crc kubenswrapper[4881]: E0121 10:58:15.311130 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.312154 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.312192 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.312202 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.312215 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.312226 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:15Z","lastTransitionTime":"2026-01-21T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.415236 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.415319 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.415344 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.415374 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.415398 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:15Z","lastTransitionTime":"2026-01-21T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.518079 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.518127 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.518138 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.518155 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.518167 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:15Z","lastTransitionTime":"2026-01-21T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.622593 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.622634 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.622645 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.622666 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.622677 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:15Z","lastTransitionTime":"2026-01-21T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.726246 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.726325 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.726342 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.726367 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.726384 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:15Z","lastTransitionTime":"2026-01-21T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.829702 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.829773 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.829831 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.829855 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.829873 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:15Z","lastTransitionTime":"2026-01-21T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.934608 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.934650 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.934662 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.934681 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.934692 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:15Z","lastTransitionTime":"2026-01-21T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.977149 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.977209 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.977220 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.977237 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.977252 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:15Z","lastTransitionTime":"2026-01-21T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:15 crc kubenswrapper[4881]: E0121 10:58:15.993595 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:15Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.998638 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.998677 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.998689 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.998706 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.998717 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:15Z","lastTransitionTime":"2026-01-21T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:16 crc kubenswrapper[4881]: E0121 10:58:16.016409 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.021098 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.021143 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.021154 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.021171 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.021182 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:16Z","lastTransitionTime":"2026-01-21T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:16 crc kubenswrapper[4881]: E0121 10:58:16.037209 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.041716 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.041757 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.041767 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.041808 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.041818 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:16Z","lastTransitionTime":"2026-01-21T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:16 crc kubenswrapper[4881]: E0121 10:58:16.057506 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.061876 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.061952 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.061969 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.061991 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.062006 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:16Z","lastTransitionTime":"2026-01-21T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:16 crc kubenswrapper[4881]: E0121 10:58:16.077843 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:16 crc kubenswrapper[4881]: E0121 10:58:16.078053 4881 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.079998 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.080040 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.080058 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.080078 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.080103 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:16Z","lastTransitionTime":"2026-01-21T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.182967 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.183070 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.183092 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.183118 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.183137 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:16Z","lastTransitionTime":"2026-01-21T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.195648 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 01:05:18.113764085 +0000 UTC Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.286325 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.286403 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.286492 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.286559 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.286578 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:16Z","lastTransitionTime":"2026-01-21T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.310144 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.310215 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:58:16 crc kubenswrapper[4881]: E0121 10:58:16.310396 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:58:16 crc kubenswrapper[4881]: E0121 10:58:16.310578 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.389557 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.389650 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.389671 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.389698 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.389716 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:16Z","lastTransitionTime":"2026-01-21T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.492899 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.492947 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.493077 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.493098 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.493109 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:16Z","lastTransitionTime":"2026-01-21T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.596163 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.596240 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.596265 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.596297 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.596320 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:16Z","lastTransitionTime":"2026-01-21T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.699511 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.699574 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.699597 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.699625 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.699644 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:16Z","lastTransitionTime":"2026-01-21T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.802704 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.803049 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.803061 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.803077 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.803089 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:16Z","lastTransitionTime":"2026-01-21T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.906915 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.906993 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.907016 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.907048 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.907071 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:16Z","lastTransitionTime":"2026-01-21T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.010212 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.010291 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.010315 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.010348 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.010373 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:17Z","lastTransitionTime":"2026-01-21T10:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.114184 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.114242 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.114264 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.114293 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.114314 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:17Z","lastTransitionTime":"2026-01-21T10:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.196386 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 19:40:40.432416603 +0000 UTC Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.217267 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.217344 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.217368 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.217397 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.217422 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:17Z","lastTransitionTime":"2026-01-21T10:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.310455 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.310455 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:58:17 crc kubenswrapper[4881]: E0121 10:58:17.310712 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:58:17 crc kubenswrapper[4881]: E0121 10:58:17.310919 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.320146 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.320248 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.320278 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.320348 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.320378 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:17Z","lastTransitionTime":"2026-01-21T10:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.425521 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.425604 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.425636 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.425669 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.425695 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:17Z","lastTransitionTime":"2026-01-21T10:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.528611 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.528667 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.528681 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.528698 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.528710 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:17Z","lastTransitionTime":"2026-01-21T10:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.632893 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.632963 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.632980 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.633005 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.633021 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:17Z","lastTransitionTime":"2026-01-21T10:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.736554 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.736623 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.736641 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.736666 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.736684 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:17Z","lastTransitionTime":"2026-01-21T10:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.840822 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.840910 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.840934 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.840967 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.840989 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:17Z","lastTransitionTime":"2026-01-21T10:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.943592 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.943859 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.943873 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.943891 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.943903 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:17Z","lastTransitionTime":"2026-01-21T10:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.046361 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.046432 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.046451 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.046477 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.046497 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:18Z","lastTransitionTime":"2026-01-21T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.149872 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.149941 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.149962 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.149991 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.150014 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:18Z","lastTransitionTime":"2026-01-21T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.197371 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 12:07:56.493725846 +0000 UTC Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.253270 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.253318 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.253334 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.253359 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.253373 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:18Z","lastTransitionTime":"2026-01-21T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.310324 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.310421 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:58:18 crc kubenswrapper[4881]: E0121 10:58:18.310493 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:58:18 crc kubenswrapper[4881]: E0121 10:58:18.310612 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.356257 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.356310 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.356327 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.356352 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.356370 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:18Z","lastTransitionTime":"2026-01-21T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.459261 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.459357 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.459379 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.459406 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.459424 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:18Z","lastTransitionTime":"2026-01-21T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.562164 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.562208 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.562219 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.562236 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.562284 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:18Z","lastTransitionTime":"2026-01-21T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.665110 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.665196 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.665219 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.665250 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.665272 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:18Z","lastTransitionTime":"2026-01-21T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.768355 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.768401 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.768415 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.768435 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.768448 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:18Z","lastTransitionTime":"2026-01-21T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.872454 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.872525 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.872542 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.872572 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.872591 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:18Z","lastTransitionTime":"2026-01-21T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.975606 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.975664 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.975677 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.975698 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.975710 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:18Z","lastTransitionTime":"2026-01-21T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.066615 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.066743 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:58:19 crc kubenswrapper[4881]: E0121 10:58:19.066821 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 10:59:23.066741869 +0000 UTC m=+150.326698368 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.066967 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:58:19 crc kubenswrapper[4881]: E0121 10:58:19.067020 4881 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:58:19 crc kubenswrapper[4881]: E0121 10:58:19.067074 4881 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:58:19 crc kubenswrapper[4881]: E0121 10:58:19.067140 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:59:23.067114108 +0000 UTC m=+150.327070607 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:58:19 crc kubenswrapper[4881]: E0121 10:58:19.067169 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:59:23.067156879 +0000 UTC m=+150.327113378 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.078103 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.078177 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.078212 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.078243 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.078269 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:19Z","lastTransitionTime":"2026-01-21T10:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.168191 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.168376 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:58:19 crc kubenswrapper[4881]: E0121 10:58:19.168460 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:58:19 crc kubenswrapper[4881]: E0121 10:58:19.168513 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:58:19 crc kubenswrapper[4881]: E0121 10:58:19.168534 4881 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:58:19 crc kubenswrapper[4881]: E0121 10:58:19.168575 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:58:19 crc kubenswrapper[4881]: E0121 10:58:19.168605 4881 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:58:19 crc kubenswrapper[4881]: E0121 10:58:19.168624 4881 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:58:19 crc kubenswrapper[4881]: E0121 10:58:19.168706 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 10:59:23.168682963 +0000 UTC m=+150.428639472 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:58:19 crc kubenswrapper[4881]: E0121 10:58:19.168824 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 10:59:23.168760585 +0000 UTC m=+150.428717084 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.181182 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.181240 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.181259 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.181282 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.181302 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:19Z","lastTransitionTime":"2026-01-21T10:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.197875 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 19:36:19.839832633 +0000 UTC Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.284585 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.284649 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.284666 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.284689 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.285039 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:19Z","lastTransitionTime":"2026-01-21T10:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.309856 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.309930 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:19 crc kubenswrapper[4881]: E0121 10:58:19.310032 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:58:19 crc kubenswrapper[4881]: E0121 10:58:19.310302 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.389076 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.389144 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.389161 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.389182 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.389197 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:19Z","lastTransitionTime":"2026-01-21T10:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.492273 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.492342 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.492366 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.492396 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.492419 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:19Z","lastTransitionTime":"2026-01-21T10:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.595568 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.595657 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.595724 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.595821 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.595849 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:19Z","lastTransitionTime":"2026-01-21T10:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.699266 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.699323 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.699339 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.699361 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.699380 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:19Z","lastTransitionTime":"2026-01-21T10:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.802590 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.802659 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.802676 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.802706 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.802723 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:19Z","lastTransitionTime":"2026-01-21T10:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.906098 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.906171 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.906195 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.906224 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.906245 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:19Z","lastTransitionTime":"2026-01-21T10:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.009300 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.009329 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.009337 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.009355 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.009374 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:20Z","lastTransitionTime":"2026-01-21T10:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.113111 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.113283 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.113294 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.113319 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.113333 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:20Z","lastTransitionTime":"2026-01-21T10:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.198554 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 13:50:55.116003222 +0000 UTC Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.215269 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.215319 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.215328 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.215346 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.215359 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:20Z","lastTransitionTime":"2026-01-21T10:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.309924 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.310019 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:58:20 crc kubenswrapper[4881]: E0121 10:58:20.310079 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:58:20 crc kubenswrapper[4881]: E0121 10:58:20.310244 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.317877 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.317938 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.317957 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.317980 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.318000 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:20Z","lastTransitionTime":"2026-01-21T10:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.421574 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.421648 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.421668 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.421696 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.421753 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:20Z","lastTransitionTime":"2026-01-21T10:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.525426 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.525513 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.525544 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.525576 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.525599 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:20Z","lastTransitionTime":"2026-01-21T10:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.628200 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.628277 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.628295 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.628322 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.628340 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:20Z","lastTransitionTime":"2026-01-21T10:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.731220 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.731293 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.731313 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.731338 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.731356 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:20Z","lastTransitionTime":"2026-01-21T10:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.835147 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.835227 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.835253 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.835288 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.835315 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:20Z","lastTransitionTime":"2026-01-21T10:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.938957 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.939002 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.939015 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.939034 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.939046 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:20Z","lastTransitionTime":"2026-01-21T10:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.042285 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.042447 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.042472 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.042502 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.042522 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:21Z","lastTransitionTime":"2026-01-21T10:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.146460 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.146516 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.146534 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.146556 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.146574 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:21Z","lastTransitionTime":"2026-01-21T10:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.199019 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 07:52:56.714836852 +0000 UTC Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.249767 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.249905 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.249925 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.249952 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.249971 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:21Z","lastTransitionTime":"2026-01-21T10:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.311121 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:58:21 crc kubenswrapper[4881]: E0121 10:58:21.311334 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.311605 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:21 crc kubenswrapper[4881]: E0121 10:58:21.312285 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.335181 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.352861 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.352924 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.352942 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.352972 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.352991 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:21Z","lastTransitionTime":"2026-01-21T10:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.455930 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.455991 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.456003 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.456023 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.456036 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:21Z","lastTransitionTime":"2026-01-21T10:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.559918 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.559967 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.559976 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.559991 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.560001 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:21Z","lastTransitionTime":"2026-01-21T10:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.662655 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.662720 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.662735 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.662757 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.662771 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:21Z","lastTransitionTime":"2026-01-21T10:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.766611 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.766688 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.766707 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.766732 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.766750 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:21Z","lastTransitionTime":"2026-01-21T10:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.870558 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.870631 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.870643 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.870661 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.870675 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:21Z","lastTransitionTime":"2026-01-21T10:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.973447 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.973526 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.973536 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.973555 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.973570 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:21Z","lastTransitionTime":"2026-01-21T10:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.076462 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.076516 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.076528 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.076546 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.076559 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:22Z","lastTransitionTime":"2026-01-21T10:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.180311 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.180367 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.180377 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.180397 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.180409 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:22Z","lastTransitionTime":"2026-01-21T10:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.199797 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 00:43:30.833902988 +0000 UTC Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.284159 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.284231 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.284250 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.284276 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.284295 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:22Z","lastTransitionTime":"2026-01-21T10:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.310210 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.310258 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:58:22 crc kubenswrapper[4881]: E0121 10:58:22.310361 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:58:22 crc kubenswrapper[4881]: E0121 10:58:22.310522 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.388323 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.388374 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.388385 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.388412 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.388425 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:22Z","lastTransitionTime":"2026-01-21T10:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.491258 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.491328 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.491345 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.491374 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.491394 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:22Z","lastTransitionTime":"2026-01-21T10:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.595204 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.595280 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.595297 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.595323 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.595341 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:22Z","lastTransitionTime":"2026-01-21T10:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.699263 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.699321 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.699338 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.699364 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.699385 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:22Z","lastTransitionTime":"2026-01-21T10:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.802126 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.802193 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.802209 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.802225 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.802237 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:22Z","lastTransitionTime":"2026-01-21T10:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.906797 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.906880 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.906903 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.906948 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.906964 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:22Z","lastTransitionTime":"2026-01-21T10:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.010356 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.010404 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.010419 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.010441 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.010457 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:23Z","lastTransitionTime":"2026-01-21T10:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.114086 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.114137 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.114151 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.114171 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.114184 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:23Z","lastTransitionTime":"2026-01-21T10:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.200092 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 02:27:49.131611695 +0000 UTC Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.217056 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.217094 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.217104 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.217116 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.217125 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:23Z","lastTransitionTime":"2026-01-21T10:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.310252 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:58:23 crc kubenswrapper[4881]: E0121 10:58:23.310350 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.310425 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:23 crc kubenswrapper[4881]: E0121 10:58:23.310768 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.319929 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.319985 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.320002 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.320026 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.320047 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:23Z","lastTransitionTime":"2026-01-21T10:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.331332 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13d0f0c4-fa31-44ba-bc94-c0a80fc1b2df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ef83fedf9cc77cf73fdd00486ec9b0b04712a60a5448402754a44ad46da439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36430b9d5b01b4a6f3b9e7b58bfbec0c258f34847b321cb45bc3b23f84cf09fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c
5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eba9cbb70fbd88687c81b18ad50f8386f836bf2fa2c8f9e1c503a20af985416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.353335 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.368845 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.396364 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18076c9a-f18b-4640-a048-68b6dbbfa85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfeb13ada78bc1504e657a94ab793ae27d4dbd9f333df47b951323f4e642e869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c05c062aefb9117f9f961f35221b8fa36b3374a184edcedea404d33539be0b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c96476b642e401c90a3f6810ea1624e2914188ba139b9303b963f1d5bc1f30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc29934ce0927ee4fdd2c97ca3bbbcaaf62870
60d05447572edeefa8a66af25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c781ff2e87fbae055bac0e3f8f77e2eeee8aa4e38c83ff4b49645798949c550c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a0569ab7ed4586aadd7deab6398db98bfc9a6afd3903d5466c05021a41632a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a0569ab7ed4586aadd7deab6398db98bfc9a6afd3903d5466c05021a41632a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae46ac7909a717555defd27b6fa785f9c7f927fd7806c7941529c2e64ee3700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dae46ac7909a717555defd27b6fa785f9c7f927fd7806c7941529c2e64ee3700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6b3e4e88955652dacaa965ab4ff099595a6bb920836bfd4ad703984e00029b98\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b3e4e88955652dacaa965ab4ff099595a6bb920836bfd4ad703984e00029b98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.424168 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.424237 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.424259 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.424289 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.424311 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:23Z","lastTransitionTime":"2026-01-21T10:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.425886 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.475309 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.492237 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.506921 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.522618 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f9987a1-d9f5-467c-82b2-533a714c4c62\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf7bf06a11465e04a80fe7ae667f9c15741137062514a621955622d2b339dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988de7ed33eebe3cf67b8c6362d70c761e509feb2c3b72e6f6a4ffb9cddbf421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988de7ed33eebe3cf67b8c6362d70c761e509feb2c3b72e6f6a4ffb9cddbf421\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.526841 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.526887 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.526903 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.526924 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.526938 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:23Z","lastTransitionTime":"2026-01-21T10:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.540143 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-c
erts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.557335 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.573207 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44307f5cc08335dc686c05c12b4ac57aeb2211a1072fff108a06b37b2e1461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:58:08Z\\\",\\\"message\\\":\\\"2026-01-21T10:57:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0cb853ce-7a29-40b7-96bf-1304acd74419\\\\n2026-01-21T10:57:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0cb853ce-7a29-40b7-96bf-1304acd74419 to /host/opt/cni/bin/\\\\n2026-01-21T10:57:23Z [verbose] 
multus-daemon started\\\\n2026-01-21T10:57:23Z [verbose] Readiness Indicator file check\\\\n2026-01-21T10:58:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.586028 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 
10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.597013 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.616182 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.630252 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.630290 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.630303 4881 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.630330 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.630343 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:23Z","lastTransitionTime":"2026-01-21T10:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.637159 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.663495 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.682263 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/r
ootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.712212 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c
9753266141b14f67cb0799a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:58:00Z\\\",\\\"message\\\":\\\"work policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:00Z is after 2025-08-24T17:21:41Z]\\\\nI0121 10:58:00.782399 6726 services_controller.go:434] Service openshift-kube-controller-manager/kube-controller-manager retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kube-controller-manager openshift-kube-controller-manager 90927ca1-43e2-420d-8485-a35952e82cd9 4812 0 2025-02-23 05:22:57 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:kube-controller-manager] map[operator.openshift.io/spec-hash:bb05a56151ce98d11c8554843985ba99e0498dcafd98129435c2d982c5ea4c11 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bx64f_openshift-ovn-kubernetes(e8bb6d97-b3b8-4e31-b704-8e565385ab26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.732744 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.732806 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.732820 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.732838 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.732850 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:23Z","lastTransitionTime":"2026-01-21T10:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.835566 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.835699 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.835723 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.835745 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.835762 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:23Z","lastTransitionTime":"2026-01-21T10:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.938298 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.938343 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.938357 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.938373 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.938385 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:23Z","lastTransitionTime":"2026-01-21T10:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.040602 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.040650 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.040661 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.040677 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.040690 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:24Z","lastTransitionTime":"2026-01-21T10:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.143554 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.143617 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.143634 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.143657 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.143675 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:24Z","lastTransitionTime":"2026-01-21T10:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.201113 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 21:14:58.781548819 +0000 UTC Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.246809 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.246841 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.246851 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.246867 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.246879 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:24Z","lastTransitionTime":"2026-01-21T10:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.310640 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.310685 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:58:24 crc kubenswrapper[4881]: E0121 10:58:24.310900 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
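
The repeating NodeNotReady condition has a single root cause visible above: ovnkube-controller, the component that writes the OVN-Kubernetes CNI config, last exited at 10:58:00 because it could not set node annotations through the same expired node.network-node-identity.openshift.io webhook, and its status shows CrashLoopBackOff ("back-off 20s restarting failed container=ovnkube-controller"). Until it comes up and publishes a config, the runtime's network-readiness probe keeps finding /etc/kubernetes/cni/net.d/ empty. A sketch of that directory check follows; the extensions scanned (*.conf, *.conflist, *.json) are the conventional CNI config suffixes and are an assumption, since the log does not enumerate them.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Directory taken from the log message; extensions assumed from
        // common CNI conventions.
        dir := "/etc/kubernetes/cni/net.d"
        var found []string
        for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
            matches, err := filepath.Glob(filepath.Join(dir, pat))
            if err != nil {
                continue // only fires on a malformed pattern
            }
            found = append(found, matches...)
        }
        if len(found) == 0 {
            fmt.Println("no CNI configuration file in", dir, "- network plugin not ready")
            os.Exit(1)
        }
        fmt.Println("CNI config present:", found)
    }

Renewing the network-node-identity serving certificate should unblock the annotation write, which in turn would let ovnkube-controller start and write the CNI config, clearing both the webhook failures and the NetworkReady=false condition.
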
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:58:24 crc kubenswrapper[4881]: E0121 10:58:24.311076 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.312556 4881 scope.go:117] "RemoveContainer" containerID="ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.352176 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.352219 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.352236 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.352260 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.352277 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:24Z","lastTransitionTime":"2026-01-21T10:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.455690 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.455741 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.455755 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.455773 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.455806 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:24Z","lastTransitionTime":"2026-01-21T10:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.558571 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.558643 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.558673 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.558707 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.558728 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:24Z","lastTransitionTime":"2026-01-21T10:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.661438 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.661504 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.661520 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.661547 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.661568 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:24Z","lastTransitionTime":"2026-01-21T10:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.764655 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.764716 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.764735 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.764759 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.764777 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:24Z","lastTransitionTime":"2026-01-21T10:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.868921 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.868987 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.869005 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.869030 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.869052 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:24Z","lastTransitionTime":"2026-01-21T10:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.972263 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.972334 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.972355 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.972378 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.972398 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:24Z","lastTransitionTime":"2026-01-21T10:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.075425 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.075490 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.075507 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.075533 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.075550 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:25Z","lastTransitionTime":"2026-01-21T10:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.181939 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.182272 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.182281 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.182298 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.182308 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:25Z","lastTransitionTime":"2026-01-21T10:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.202308 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 15:21:01.798624178 +0000 UTC Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.235746 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovnkube-controller/2.log" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.239852 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerStarted","Data":"d5e11e8e5cd4b0f5d5b59050f20100006189356085839bd098e65e66ddf3accb"} Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.241779 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.259342 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.283097 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5e11e8e5cd4b0f5d5b59050f20100006189356085839bd098e65e66ddf3accb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:58:00Z\\\",\\\"message\\\":\\\"work policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:00Z is after 2025-08-24T17:21:41Z]\\\\nI0121 10:58:00.782399 6726 services_controller.go:434] Service openshift-kube-controller-manager/kube-controller-manager retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kube-controller-manager openshift-kube-controller-manager 90927ca1-43e2-420d-8485-a35952e82cd9 4812 0 2025-02-23 05:22:57 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:kube-controller-manager] map[operator.openshift.io/spec-hash:bb05a56151ce98d11c8554843985ba99e0498dcafd98129435c2d982c5ea4c11 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.286473 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.286514 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.286526 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.286540 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.286553 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:25Z","lastTransitionTime":"2026-01-21T10:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.299209 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.309985 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:58:25 crc kubenswrapper[4881]: E0121 10:58:25.310181 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.310487 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:25 crc kubenswrapper[4881]: E0121 10:58:25.310600 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.315975 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.334798 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.362726 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13d0f0c4-fa31-44ba-bc94-c0a80fc1b2df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ef83fedf9cc77cf73fdd00486ec9b0b04712a60a5448402754a44ad46da439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36430b9d5b01b4a6f3b9e7b58bfbec0c258f34847b321cb45bc3b23f84cf09fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eba9cbb70fbd88687c81b18ad50f8386f836bf2fa2
c8f9e1c503a20af985416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.376959 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.388935 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.388984 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.388996 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.389012 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.389023 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:25Z","lastTransitionTime":"2026-01-21T10:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.392190 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.407996 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.421367 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.492809 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.492860 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.492873 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.492895 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.492912 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:25Z","lastTransitionTime":"2026-01-21T10:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.493699 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18076c9a-f18b-4640-a048-68b6dbbfa85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfeb13ada78bc1504e657a94ab793ae27d4dbd9f333df47b951323f4e642e869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c05c062aefb9117f9f961f35221b8fa36b3374a184edcedea404d33539be0b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c96476b642e401c90a3f6810ea1624e2914188ba139b9303b963f1d5bc1f30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc29934ce0927ee4fdd2c97ca3bbbcaaf6287060d05447572edeefa8a66af25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c781ff2e87fbae055bac0e3f8f77e2eeee8aa4e38c83ff4b49645798949c550c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a0569ab7ed4586aadd7deab6398db98bfc9a6afd3903d5466c05021a41632a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a0569ab7ed4586aadd7deab6398db98bfc9a6afd3903d5466c05021a41632a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae46ac7909a717555defd27b6fa785f9c7f927fd7806c7941529c2e64ee3700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://dae46ac7909a717555defd27b6fa785f9c7f927fd7806c7941529c2e64ee3700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6b3e4e88955652dacaa965ab4ff099595a6bb920836bfd4ad703984e00029b98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b3e4e88955652dacaa965ab4ff099595a6bb920836bfd4ad703984e00029b98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.528007 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=71.527940963 podStartE2EDuration="1m11.527940963s" podCreationTimestamp="2026-01-21 10:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:25.525703518 +0000 UTC m=+92.785660007" watchObservedRunningTime="2026-01-21 10:58:25.527940963 +0000 UTC m=+92.787897432" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.595708 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.595826 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.595842 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.595863 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.595877 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:25Z","lastTransitionTime":"2026-01-21T10:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.596628 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-fs42r" podStartSLOduration=71.59661583 podStartE2EDuration="1m11.59661583s" podCreationTimestamp="2026-01-21 10:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:25.595201545 +0000 UTC m=+92.855158014" watchObservedRunningTime="2026-01-21 10:58:25.59661583 +0000 UTC m=+92.856572299" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.623852 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" podStartSLOduration=69.623821518 podStartE2EDuration="1m9.623821518s" podCreationTimestamp="2026-01-21 10:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:25.611112275 +0000 UTC m=+92.871068744" watchObservedRunningTime="2026-01-21 10:58:25.623821518 +0000 UTC m=+92.883777987" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.655448 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=13.655425294 podStartE2EDuration="13.655425294s" podCreationTimestamp="2026-01-21 10:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:25.637537305 +0000 UTC m=+92.897493784" watchObservedRunningTime="2026-01-21 10:58:25.655425294 +0000 UTC m=+92.915381763" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.655554 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=71.655550597 podStartE2EDuration="1m11.655550597s" podCreationTimestamp="2026-01-21 10:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:25.654289096 +0000 UTC m=+92.914245565" watchObservedRunningTime="2026-01-21 10:58:25.655550597 +0000 UTC m=+92.915507066" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.699075 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.699129 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.699140 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.699156 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.699167 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:25Z","lastTransitionTime":"2026-01-21T10:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.802207 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.802288 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.802302 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.802326 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.802342 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:25Z","lastTransitionTime":"2026-01-21T10:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.905115 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.905184 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.905197 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.905220 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.905233 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:25Z","lastTransitionTime":"2026-01-21T10:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.008725 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.008773 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.008804 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.008828 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.008841 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:26Z","lastTransitionTime":"2026-01-21T10:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.112180 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.112227 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.112238 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.112256 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.112266 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:26Z","lastTransitionTime":"2026-01-21T10:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.202469 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.202545 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.202557 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.202582 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.202594 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:26Z","lastTransitionTime":"2026-01-21T10:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.202760 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 06:04:47.605299495 +0000 UTC Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.230461 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.230507 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.230520 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.230538 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.230547 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:26Z","lastTransitionTime":"2026-01-21T10:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.287617 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz"] Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.288258 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.298196 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.298199 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.298210 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.299679 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.310374 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.310431 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:58:26 crc kubenswrapper[4881]: E0121 10:58:26.310639 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:58:26 crc kubenswrapper[4881]: E0121 10:58:26.310928 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.311876 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=37.311849297 podStartE2EDuration="37.311849297s" podCreationTimestamp="2026-01-21 10:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:26.310244457 +0000 UTC m=+93.570200946" watchObservedRunningTime="2026-01-21 10:58:26.311849297 +0000 UTC m=+93.571805766" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.342941 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-8sptw" podStartSLOduration=72.34292151 podStartE2EDuration="1m12.34292151s" podCreationTimestamp="2026-01-21 10:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:26.342242953 +0000 UTC m=+93.602199422" watchObservedRunningTime="2026-01-21 10:58:26.34292151 +0000 UTC m=+93.602877969" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.375846 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4642cf40-137f-4659-9190-d17f93aac69f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-9d9kz\" (UID: \"4642cf40-137f-4659-9190-d17f93aac69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.376010 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4642cf40-137f-4659-9190-d17f93aac69f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-9d9kz\" (UID: \"4642cf40-137f-4659-9190-d17f93aac69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.376085 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4642cf40-137f-4659-9190-d17f93aac69f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-9d9kz\" (UID: \"4642cf40-137f-4659-9190-d17f93aac69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.376300 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4642cf40-137f-4659-9190-d17f93aac69f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-9d9kz\" (UID: \"4642cf40-137f-4659-9190-d17f93aac69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 
10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.376381 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4642cf40-137f-4659-9190-d17f93aac69f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-9d9kz\" (UID: \"4642cf40-137f-4659-9190-d17f93aac69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.381343 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=5.381331443 podStartE2EDuration="5.381331443s" podCreationTimestamp="2026-01-21 10:58:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:26.381243081 +0000 UTC m=+93.641199560" watchObservedRunningTime="2026-01-21 10:58:26.381331443 +0000 UTC m=+93.641287912" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.417753 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-tjwf8" podStartSLOduration=71.417729437 podStartE2EDuration="1m11.417729437s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:26.416767824 +0000 UTC m=+93.676724303" watchObservedRunningTime="2026-01-21 10:58:26.417729437 +0000 UTC m=+93.677685906" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.472576 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" podStartSLOduration=72.472558763 podStartE2EDuration="1m12.472558763s" podCreationTimestamp="2026-01-21 10:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:26.471772154 +0000 UTC m=+93.731728643" watchObservedRunningTime="2026-01-21 10:58:26.472558763 +0000 UTC m=+93.732515222" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.477034 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4642cf40-137f-4659-9190-d17f93aac69f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-9d9kz\" (UID: \"4642cf40-137f-4659-9190-d17f93aac69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.477073 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4642cf40-137f-4659-9190-d17f93aac69f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-9d9kz\" (UID: \"4642cf40-137f-4659-9190-d17f93aac69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.477105 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4642cf40-137f-4659-9190-d17f93aac69f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-9d9kz\" (UID: \"4642cf40-137f-4659-9190-d17f93aac69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.477131 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4642cf40-137f-4659-9190-d17f93aac69f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-9d9kz\" (UID: \"4642cf40-137f-4659-9190-d17f93aac69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.477163 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4642cf40-137f-4659-9190-d17f93aac69f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-9d9kz\" (UID: \"4642cf40-137f-4659-9190-d17f93aac69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.477224 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4642cf40-137f-4659-9190-d17f93aac69f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-9d9kz\" (UID: \"4642cf40-137f-4659-9190-d17f93aac69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.478099 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4642cf40-137f-4659-9190-d17f93aac69f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-9d9kz\" (UID: \"4642cf40-137f-4659-9190-d17f93aac69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.479079 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4642cf40-137f-4659-9190-d17f93aac69f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-9d9kz\" (UID: \"4642cf40-137f-4659-9190-d17f93aac69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.488966 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4642cf40-137f-4659-9190-d17f93aac69f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-9d9kz\" (UID: \"4642cf40-137f-4659-9190-d17f93aac69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.489824 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podStartSLOduration=72.489777347 podStartE2EDuration="1m12.489777347s" podCreationTimestamp="2026-01-21 10:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:26.489401467 +0000 UTC m=+93.749357956" watchObservedRunningTime="2026-01-21 10:58:26.489777347 +0000 UTC m=+93.749733816" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.508467 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4642cf40-137f-4659-9190-d17f93aac69f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-9d9kz\" (UID: \"4642cf40-137f-4659-9190-d17f93aac69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.525652 4881 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" podStartSLOduration=71.525631777 podStartE2EDuration="1m11.525631777s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:26.525225077 +0000 UTC m=+93.785181566" watchObservedRunningTime="2026-01-21 10:58:26.525631777 +0000 UTC m=+93.785588246" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.605721 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 10:58:26 crc kubenswrapper[4881]: W0121 10:58:26.623327 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4642cf40_137f_4659_9190_d17f93aac69f.slice/crio-3422ee5a6ecb64985051553fca84ce5f8a4ff36db844b9adb5c24988571cc841 WatchSource:0}: Error finding container 3422ee5a6ecb64985051553fca84ce5f8a4ff36db844b9adb5c24988571cc841: Status 404 returned error can't find the container with id 3422ee5a6ecb64985051553fca84ce5f8a4ff36db844b9adb5c24988571cc841 Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.859197 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-dtv4t"] Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.859297 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:26 crc kubenswrapper[4881]: E0121 10:58:26.859412 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:58:27 crc kubenswrapper[4881]: I0121 10:58:27.203563 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 18:07:49.843039945 +0000 UTC Jan 21 10:58:27 crc kubenswrapper[4881]: I0121 10:58:27.204162 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 21 10:58:27 crc kubenswrapper[4881]: I0121 10:58:27.212096 4881 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 21 10:58:27 crc kubenswrapper[4881]: I0121 10:58:27.253682 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" event={"ID":"4642cf40-137f-4659-9190-d17f93aac69f","Type":"ContainerStarted","Data":"8b1a0621c0a5179658baaa5fc83f26a2cce4e83d35c2291f306deffc9f29be15"} Jan 21 10:58:27 crc kubenswrapper[4881]: I0121 10:58:27.253736 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" event={"ID":"4642cf40-137f-4659-9190-d17f93aac69f","Type":"ContainerStarted","Data":"3422ee5a6ecb64985051553fca84ce5f8a4ff36db844b9adb5c24988571cc841"} Jan 21 10:58:27 crc kubenswrapper[4881]: I0121 10:58:27.270765 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" podStartSLOduration=73.270742847 podStartE2EDuration="1m13.270742847s" podCreationTimestamp="2026-01-21 10:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:27.270100562 +0000 UTC m=+94.530057051" watchObservedRunningTime="2026-01-21 10:58:27.270742847 +0000 UTC m=+94.530699336" Jan 21 10:58:27 crc kubenswrapper[4881]: I0121 10:58:27.310251 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:58:27 crc kubenswrapper[4881]: E0121 10:58:27.310464 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:58:28 crc kubenswrapper[4881]: I0121 10:58:28.310921 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:58:28 crc kubenswrapper[4881]: I0121 10:58:28.310937 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:58:28 crc kubenswrapper[4881]: I0121 10:58:28.310938 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:28 crc kubenswrapper[4881]: E0121 10:58:28.311260 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:58:28 crc kubenswrapper[4881]: E0121 10:58:28.311319 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:58:28 crc kubenswrapper[4881]: E0121 10:58:28.311073 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.311905 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:58:29 crc kubenswrapper[4881]: E0121 10:58:29.312019 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.537152 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.537435 4881 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.584590 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.585663 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.585804 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-rslv2"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.586889 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.587136 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-cclnc"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.587983 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-svmbc"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.588266 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.588843 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.589075 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.589895 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.590904 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.592170 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.592588 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wjlxh"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.592619 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.593207 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.593697 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.594070 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.596561 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zjqz6"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.597006 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.597456 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.597875 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-zjqz6" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.597946 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-qxzd9"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.598303 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.601924 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.602745 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.603067 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.603353 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.605719 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.609265 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-n2h44"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.610416 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.610860 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-n2h44" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.610980 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.612617 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jvxv4"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.613195 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.613692 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.614095 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.614175 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.614763 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.619466 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.621618 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-wrqpb"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.622182 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-wrqpb" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.624993 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.625271 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.625626 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.625684 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.626049 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.626224 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.626504 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.626819 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.626973 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.627176 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.626893 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.627508 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.627706 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.628018 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.628023 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.628320 4881 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.628820 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.628979 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.629174 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.629221 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.629260 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.629287 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.637347 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.639403 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-h97cd"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.640437 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.640739 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.641118 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.642187 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.642498 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.642602 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.642219 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.643383 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.644078 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.655904 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.663619 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.663989 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.664117 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.664269 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.664356 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.666109 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.666313 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.666530 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.666607 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.666956 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.667041 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.667128 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.667326 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.667445 4881 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.667460 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.668756 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-v7wnh"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.669451 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.677527 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.677825 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.677843 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.677994 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.678082 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.678288 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.678376 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.678451 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.678584 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.678749 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.679011 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.679098 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.679223 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.679309 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.679391 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 
10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.679529 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.679661 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.679743 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.680903 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.681087 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.681164 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.681319 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.681394 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.681435 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.681526 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.681575 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.681616 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.681691 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.681727 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.681737 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.681847 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.681987 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.682000 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: 
I0121 10:58:29.682079 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.682104 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.682156 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.682242 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.679661 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.682342 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n98tz"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.683146 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-whh46"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.683573 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-j4s5w"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.684062 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.684630 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.684708 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.684946 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-j4s5w" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.685133 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.694031 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.694297 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.694859 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.696055 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.696180 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.696312 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.696533 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.697038 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.697866 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.698828 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.699382 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.699961 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.699974 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711432 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n2nt\" (UniqueName: \"kubernetes.io/projected/52d94566-7844-4414-bf48-9122c2207dd6-kube-api-access-2n2nt\") pod \"router-default-5444994796-v7wnh\" (UID: \"52d94566-7844-4414-bf48-9122c2207dd6\") " pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711472 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-oauth-serving-cert\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711490 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f74368-89f6-44fb-aaa2-9159a217b4d7-config\") pod \"console-operator-58897d9998-zjqz6\" (UID: \"f1f74368-89f6-44fb-aaa2-9159a217b4d7\") " pod="openshift-console-operator/console-operator-58897d9998-zjqz6" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711510 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-client-ca\") pod \"route-controller-manager-6576b87f9c-5xwk8\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711531 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-image-import-ca\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711566 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-service-ca\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711602 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c-machine-approver-tls\") pod \"machine-approver-56656f9798-ntqvz\" (UID: \"1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711618 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3201b51c-af63-40e7-8037-9e581791d495-etcd-client\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:29 crc 
kubenswrapper[4881]: I0121 10:58:29.711636 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-serving-cert\") pod \"route-controller-manager-6576b87f9c-5xwk8\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711656 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8465162e-dd9f-45b4-83a6-94666ac2b87b-config\") pod \"machine-api-operator-5694c8668f-cclnc\" (UID: \"8465162e-dd9f-45b4-83a6-94666ac2b87b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711677 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96e1443d-dd18-4343-b200-756f9511c163-service-ca-bundle\") pod \"authentication-operator-69f744f599-jvxv4\" (UID: \"96e1443d-dd18-4343-b200-756f9511c163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711705 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-trusted-ca-bundle\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711722 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f74368-89f6-44fb-aaa2-9159a217b4d7-serving-cert\") pod \"console-operator-58897d9998-zjqz6\" (UID: \"f1f74368-89f6-44fb-aaa2-9159a217b4d7\") " pod="openshift-console-operator/console-operator-58897d9998-zjqz6" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711741 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1f74368-89f6-44fb-aaa2-9159a217b4d7-trusted-ca\") pod \"console-operator-58897d9998-zjqz6\" (UID: \"f1f74368-89f6-44fb-aaa2-9159a217b4d7\") " pod="openshift-console-operator/console-operator-58897d9998-zjqz6" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711770 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhrlb\" (UniqueName: \"kubernetes.io/projected/1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c-kube-api-access-mhrlb\") pod \"machine-approver-56656f9798-ntqvz\" (UID: \"1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711804 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cldhz\" (UniqueName: \"kubernetes.io/projected/628cb8f4-a587-498f-9398-403e0af5eec4-kube-api-access-cldhz\") pod \"downloads-7954f5f757-wrqpb\" (UID: \"628cb8f4-a587-498f-9398-403e0af5eec4\") " pod="openshift-console/downloads-7954f5f757-wrqpb" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711823 4881 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/863eda44-9a47-42de-b2de-49234ac647f0-metrics-tls\") pod \"dns-operator-744455d44c-n2h44\" (UID: \"863eda44-9a47-42de-b2de-49234ac647f0\") " pod="openshift-dns-operator/dns-operator-744455d44c-n2h44" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711844 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z962\" (UniqueName: \"kubernetes.io/projected/537a87a4-8f58-441f-9199-62c5849c693c-kube-api-access-4z962\") pod \"openshift-config-operator-7777fb866f-rslv2\" (UID: \"537a87a4-8f58-441f-9199-62c5849c693c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711865 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27c4b3cb-57d3-4282-93fe-16cfab039277-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lm4k2\" (UID: \"27c4b3cb-57d3-4282-93fe-16cfab039277\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711888 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72phf\" (UniqueName: \"kubernetes.io/projected/29dca8bf-7bce-455b-812f-fca8861518ca-kube-api-access-72phf\") pod \"openshift-apiserver-operator-796bbdcf4f-vfcd9\" (UID: \"29dca8bf-7bce-455b-812f-fca8861518ca\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711911 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6mtd\" (UniqueName: \"kubernetes.io/projected/5d68a50c-6a38-4aba-bb02-9a25712d2212-kube-api-access-r6mtd\") pod \"cluster-image-registry-operator-dc59b4c8b-8kvzw\" (UID: \"5d68a50c-6a38-4aba-bb02-9a25712d2212\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711932 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0ceebcd8-2c53-4e4d-97bb-5d81008a6442-metrics-tls\") pod \"ingress-operator-5b745b69d9-w5l6w\" (UID: \"0ceebcd8-2c53-4e4d-97bb-5d81008a6442\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711948 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb-config\") pod \"kube-controller-manager-operator-78b949d7b-pjbh7\" (UID: \"e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711968 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-serving-cert\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9" Jan 
21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711989 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b745a377-4575-45fb-a206-ea4754ecff76-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-phm68\" (UID: \"b745a377-4575-45fb-a206-ea4754ecff76\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712011 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4f8p\" (UniqueName: \"kubernetes.io/projected/8465162e-dd9f-45b4-83a6-94666ac2b87b-kube-api-access-d4f8p\") pod \"machine-api-operator-5694c8668f-cclnc\" (UID: \"8465162e-dd9f-45b4-83a6-94666ac2b87b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712030 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-trusted-ca-bundle\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712048 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hxmk\" (UniqueName: \"kubernetes.io/projected/863eda44-9a47-42de-b2de-49234ac647f0-kube-api-access-8hxmk\") pod \"dns-operator-744455d44c-n2h44\" (UID: \"863eda44-9a47-42de-b2de-49234ac647f0\") " pod="openshift-dns-operator/dns-operator-744455d44c-n2h44" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712067 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kgjc\" (UniqueName: \"kubernetes.io/projected/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-kube-api-access-9kgjc\") pod \"route-controller-manager-6576b87f9c-5xwk8\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712090 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0ceebcd8-2c53-4e4d-97bb-5d81008a6442-bound-sa-token\") pod \"ingress-operator-5b745b69d9-w5l6w\" (UID: \"0ceebcd8-2c53-4e4d-97bb-5d81008a6442\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712109 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3201b51c-af63-40e7-8037-9e581791d495-etcd-service-ca\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712124 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-serving-cert\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 
10:58:29.712146 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gck6q\" (UniqueName: \"kubernetes.io/projected/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-kube-api-access-gck6q\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712166 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96e1443d-dd18-4343-b200-756f9511c163-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jvxv4\" (UID: \"96e1443d-dd18-4343-b200-756f9511c163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712186 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-config\") pod \"route-controller-manager-6576b87f9c-5xwk8\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712207 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-node-pullsecrets\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712223 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-audit-dir\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712243 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-oauth-config\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712262 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blg69\" (UniqueName: \"kubernetes.io/projected/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-kube-api-access-blg69\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712280 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96e1443d-dd18-4343-b200-756f9511c163-serving-cert\") pod \"authentication-operator-69f744f599-jvxv4\" (UID: \"96e1443d-dd18-4343-b200-756f9511c163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712298 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0ceebcd8-2c53-4e4d-97bb-5d81008a6442-trusted-ca\") pod \"ingress-operator-5b745b69d9-w5l6w\" (UID: \"0ceebcd8-2c53-4e4d-97bb-5d81008a6442\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712319 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c-auth-proxy-config\") pod \"machine-approver-56656f9798-ntqvz\" (UID: \"1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712345 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712364 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/52d94566-7844-4414-bf48-9122c2207dd6-stats-auth\") pod \"router-default-5444994796-v7wnh\" (UID: \"52d94566-7844-4414-bf48-9122c2207dd6\") " pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712380 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/52d94566-7844-4414-bf48-9122c2207dd6-metrics-certs\") pod \"router-default-5444994796-v7wnh\" (UID: \"52d94566-7844-4414-bf48-9122c2207dd6\") " pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712424 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5d68a50c-6a38-4aba-bb02-9a25712d2212-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8kvzw\" (UID: \"5d68a50c-6a38-4aba-bb02-9a25712d2212\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712446 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8465162e-dd9f-45b4-83a6-94666ac2b87b-images\") pod \"machine-api-operator-5694c8668f-cclnc\" (UID: \"8465162e-dd9f-45b4-83a6-94666ac2b87b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712470 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn8zf\" (UniqueName: \"kubernetes.io/projected/002a39eb-e2e0-4d3e-8f61-89a539a653a9-kube-api-access-vn8zf\") pod \"controller-manager-879f6c89f-wjlxh\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712513 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/537a87a4-8f58-441f-9199-62c5849c693c-serving-cert\") pod \"openshift-config-operator-7777fb866f-rslv2\" (UID: \"537a87a4-8f58-441f-9199-62c5849c693c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712546 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3201b51c-af63-40e7-8037-9e581791d495-etcd-ca\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712580 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-config\") pod \"controller-manager-879f6c89f-wjlxh\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712600 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e960def-7bc7-4041-94dc-8ccea63f8bb8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7cs59\" (UID: \"1e960def-7bc7-4041-94dc-8ccea63f8bb8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712643 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qshkt\" (UniqueName: \"kubernetes.io/projected/f1f74368-89f6-44fb-aaa2-9159a217b4d7-kube-api-access-qshkt\") pod \"console-operator-58897d9998-zjqz6\" (UID: \"f1f74368-89f6-44fb-aaa2-9159a217b4d7\") " pod="openshift-console-operator/console-operator-58897d9998-zjqz6" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712673 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghfkh\" (UniqueName: \"kubernetes.io/projected/3201b51c-af63-40e7-8037-9e581791d495-kube-api-access-ghfkh\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712697 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712729 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-config\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712750 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-etcd-serving-ca\") pod 
\"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.716664 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3201b51c-af63-40e7-8037-9e581791d495-serving-cert\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.716730 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-client-ca\") pod \"controller-manager-879f6c89f-wjlxh\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.716775 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29dca8bf-7bce-455b-812f-fca8861518ca-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vfcd9\" (UID: \"29dca8bf-7bce-455b-812f-fca8861518ca\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.716900 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29dca8bf-7bce-455b-812f-fca8861518ca-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vfcd9\" (UID: \"29dca8bf-7bce-455b-812f-fca8861518ca\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.716924 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czg99\" (UniqueName: \"kubernetes.io/projected/0ceebcd8-2c53-4e4d-97bb-5d81008a6442-kube-api-access-czg99\") pod \"ingress-operator-5b745b69d9-w5l6w\" (UID: \"0ceebcd8-2c53-4e4d-97bb-5d81008a6442\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.716952 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-etcd-client\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.716977 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/52d94566-7844-4414-bf48-9122c2207dd6-default-certificate\") pod \"router-default-5444994796-v7wnh\" (UID: \"52d94566-7844-4414-bf48-9122c2207dd6\") " pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.716998 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5d68a50c-6a38-4aba-bb02-9a25712d2212-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8kvzw\" (UID: \"5d68a50c-6a38-4aba-bb02-9a25712d2212\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717083 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-pjbh7\" (UID: \"e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717103 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-audit-dir\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717125 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52d94566-7844-4414-bf48-9122c2207dd6-service-ca-bundle\") pod \"router-default-5444994796-v7wnh\" (UID: \"52d94566-7844-4414-bf48-9122c2207dd6\") " pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717149 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/537a87a4-8f58-441f-9199-62c5849c693c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-rslv2\" (UID: \"537a87a4-8f58-441f-9199-62c5849c693c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717278 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c-config\") pod \"machine-approver-56656f9798-ntqvz\" (UID: \"1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717302 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wjlxh\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717325 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e960def-7bc7-4041-94dc-8ccea63f8bb8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7cs59\" (UID: \"1e960def-7bc7-4041-94dc-8ccea63f8bb8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717345 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ppts\" (UniqueName: \"kubernetes.io/projected/96e1443d-dd18-4343-b200-756f9511c163-kube-api-access-7ppts\") pod \"authentication-operator-69f744f599-jvxv4\" (UID: 
\"96e1443d-dd18-4343-b200-756f9511c163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717365 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3201b51c-af63-40e7-8037-9e581791d495-config\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717399 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-encryption-config\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717430 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7rfj\" (UniqueName: \"kubernetes.io/projected/b745a377-4575-45fb-a206-ea4754ecff76-kube-api-access-p7rfj\") pod \"cluster-samples-operator-665b6dd947-phm68\" (UID: \"b745a377-4575-45fb-a206-ea4754ecff76\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717448 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-encryption-config\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717470 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e960def-7bc7-4041-94dc-8ccea63f8bb8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7cs59\" (UID: \"1e960def-7bc7-4041-94dc-8ccea63f8bb8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717490 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/002a39eb-e2e0-4d3e-8f61-89a539a653a9-serving-cert\") pod \"controller-manager-879f6c89f-wjlxh\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717515 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-config\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717542 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-serving-cert\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717581 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-audit-policies\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717599 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-audit\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717620 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d68a50c-6a38-4aba-bb02-9a25712d2212-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8kvzw\" (UID: \"5d68a50c-6a38-4aba-bb02-9a25712d2212\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717674 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-pjbh7\" (UID: \"e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717690 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8465162e-dd9f-45b4-83a6-94666ac2b87b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-cclnc\" (UID: \"8465162e-dd9f-45b4-83a6-94666ac2b87b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717721 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-etcd-client\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717737 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5477x\" (UniqueName: \"kubernetes.io/projected/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-kube-api-access-5477x\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717768 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27c4b3cb-57d3-4282-93fe-16cfab039277-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lm4k2\" (UID: \"27c4b3cb-57d3-4282-93fe-16cfab039277\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717820 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4vrg\" (UniqueName: \"kubernetes.io/projected/27c4b3cb-57d3-4282-93fe-16cfab039277-kube-api-access-z4vrg\") pod \"openshift-controller-manager-operator-756b6f6bc6-lm4k2\" (UID: \"27c4b3cb-57d3-4282-93fe-16cfab039277\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717843 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96e1443d-dd18-4343-b200-756f9511c163-config\") pod \"authentication-operator-69f744f599-jvxv4\" (UID: \"96e1443d-dd18-4343-b200-756f9511c163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.719964 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.721222 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.721997 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.727025 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.728282 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.733407 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.740032 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.740121 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.763394 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xmq82"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.764415 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.764414 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.765018 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.765382 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-llgd7"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.767895 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.768853 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-llgd7" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.769239 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.769384 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.769451 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.769616 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.769744 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.769883 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.770111 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.769390 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.771157 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qpdx4"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.772568 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpdx4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.773260 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.773935 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.772770 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-f877x"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.775723 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f877x" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.781659 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.781710 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.782427 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.782710 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hfc8p"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.783318 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hfc8p" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.785577 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-qxzd9"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.786581 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.787665 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.789370 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wjlxh"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.791055 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-kl9j4"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.791845 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kl9j4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.792649 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-468h5"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.793516 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-468h5" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.800891 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-rslv2"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.800969 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-cclnc"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.800986 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.800998 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-n2h44"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.801008 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-h97cd"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.801017 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.804650 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.805894 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.807886 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-svmbc"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.812872 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.813461 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.816127 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.817397 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.818765 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820211 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n98tz"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820286 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c-auth-proxy-config\") pod \"machine-approver-56656f9798-ntqvz\" (UID: \"1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820323 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820349 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5d68a50c-6a38-4aba-bb02-9a25712d2212-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8kvzw\" (UID: \"5d68a50c-6a38-4aba-bb02-9a25712d2212\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820391 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8465162e-dd9f-45b4-83a6-94666ac2b87b-images\") pod \"machine-api-operator-5694c8668f-cclnc\" (UID: \"8465162e-dd9f-45b4-83a6-94666ac2b87b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820422 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn8zf\" (UniqueName: \"kubernetes.io/projected/002a39eb-e2e0-4d3e-8f61-89a539a653a9-kube-api-access-vn8zf\") pod \"controller-manager-879f6c89f-wjlxh\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820455 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pkjt\" (UniqueName: \"kubernetes.io/projected/2957ef21-9f30-4c81-8c6a-4a7f9d7315db-kube-api-access-9pkjt\") pod \"package-server-manager-789f6589d5-72bt6\" (UID: \"2957ef21-9f30-4c81-8c6a-4a7f9d7315db\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820478 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/537a87a4-8f58-441f-9199-62c5849c693c-serving-cert\") pod \"openshift-config-operator-7777fb866f-rslv2\" (UID: \"537a87a4-8f58-441f-9199-62c5849c693c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820495 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-config\") pod \"controller-manager-879f6c89f-wjlxh\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820512 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e960def-7bc7-4041-94dc-8ccea63f8bb8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7cs59\" (UID: \"1e960def-7bc7-4041-94dc-8ccea63f8bb8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820528 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qshkt\" (UniqueName: 
\"kubernetes.io/projected/f1f74368-89f6-44fb-aaa2-9159a217b4d7-kube-api-access-qshkt\") pod \"console-operator-58897d9998-zjqz6\" (UID: \"f1f74368-89f6-44fb-aaa2-9159a217b4d7\") " pod="openshift-console-operator/console-operator-58897d9998-zjqz6" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820548 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghfkh\" (UniqueName: \"kubernetes.io/projected/3201b51c-af63-40e7-8037-9e581791d495-kube-api-access-ghfkh\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820565 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820584 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-etcd-serving-ca\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820600 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3201b51c-af63-40e7-8037-9e581791d495-serving-cert\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820620 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czg99\" (UniqueName: \"kubernetes.io/projected/0ceebcd8-2c53-4e4d-97bb-5d81008a6442-kube-api-access-czg99\") pod \"ingress-operator-5b745b69d9-w5l6w\" (UID: \"0ceebcd8-2c53-4e4d-97bb-5d81008a6442\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820639 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29dca8bf-7bce-455b-812f-fca8861518ca-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vfcd9\" (UID: \"29dca8bf-7bce-455b-812f-fca8861518ca\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820658 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbqhc\" (UniqueName: \"kubernetes.io/projected/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-kube-api-access-lbqhc\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820675 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-etcd-client\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " 
pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820691 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5d68a50c-6a38-4aba-bb02-9a25712d2212-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8kvzw\" (UID: \"5d68a50c-6a38-4aba-bb02-9a25712d2212\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820707 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52d94566-7844-4414-bf48-9122c2207dd6-service-ca-bundle\") pod \"router-default-5444994796-v7wnh\" (UID: \"52d94566-7844-4414-bf48-9122c2207dd6\") " pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820721 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e960def-7bc7-4041-94dc-8ccea63f8bb8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7cs59\" (UID: \"1e960def-7bc7-4041-94dc-8ccea63f8bb8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820736 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ppts\" (UniqueName: \"kubernetes.io/projected/96e1443d-dd18-4343-b200-756f9511c163-kube-api-access-7ppts\") pod \"authentication-operator-69f744f599-jvxv4\" (UID: \"96e1443d-dd18-4343-b200-756f9511c163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820750 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-encryption-config\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820767 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820803 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8jwm\" (UniqueName: \"kubernetes.io/projected/c56c4a24-e6c6-4aa0-8a62-94d3179dfe54-kube-api-access-l8jwm\") pod \"catalog-operator-68c6474976-7gdkq\" (UID: \"c56c4a24-e6c6-4aa0-8a62-94d3179dfe54\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820821 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f997bb38-4f6e-495f-acb8-e8e0d1730947-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vp6qk\" (UID: \"f997bb38-4f6e-495f-acb8-e8e0d1730947\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820837 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e94f1e92-21b2-44c9-b499-b879850c288d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xmq82\" (UID: \"e94f1e92-21b2-44c9-b499-b879850c288d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820852 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/002a39eb-e2e0-4d3e-8f61-89a539a653a9-serving-cert\") pod \"controller-manager-879f6c89f-wjlxh\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820883 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-serving-cert\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820905 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820932 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-audit\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820949 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c56c4a24-e6c6-4aa0-8a62-94d3179dfe54-profile-collector-cert\") pod \"catalog-operator-68c6474976-7gdkq\" (UID: \"c56c4a24-e6c6-4aa0-8a62-94d3179dfe54\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820966 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d68a50c-6a38-4aba-bb02-9a25712d2212-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8kvzw\" (UID: \"5d68a50c-6a38-4aba-bb02-9a25712d2212\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820985 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7470431a-2a31-41ae-b021-510ae5e3c505-proxy-tls\") pod \"machine-config-controller-84d6567774-vwqwb\" (UID: \"7470431a-2a31-41ae-b021-510ae5e3c505\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb" 
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821003 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-995tp\" (UniqueName: \"kubernetes.io/projected/e94f1e92-21b2-44c9-b499-b879850c288d-kube-api-access-995tp\") pod \"marketplace-operator-79b997595-xmq82\" (UID: \"e94f1e92-21b2-44c9-b499-b879850c288d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821025 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-etcd-client\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821042 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96e1443d-dd18-4343-b200-756f9511c163-config\") pod \"authentication-operator-69f744f599-jvxv4\" (UID: \"96e1443d-dd18-4343-b200-756f9511c163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821056 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f74368-89f6-44fb-aaa2-9159a217b4d7-config\") pod \"console-operator-58897d9998-zjqz6\" (UID: \"f1f74368-89f6-44fb-aaa2-9159a217b4d7\") " pod="openshift-console-operator/console-operator-58897d9998-zjqz6" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821072 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-client-ca\") pod \"route-controller-manager-6576b87f9c-5xwk8\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821089 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgqgf\" (UniqueName: \"kubernetes.io/projected/86ac2c23-01e6-4a22-a79d-77a3269fb5a0-kube-api-access-wgqgf\") pod \"migrator-59844c95c7-qpdx4\" (UID: \"86ac2c23-01e6-4a22-a79d-77a3269fb5a0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpdx4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821117 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c56c4a24-e6c6-4aa0-8a62-94d3179dfe54-srv-cert\") pod \"catalog-operator-68c6474976-7gdkq\" (UID: \"c56c4a24-e6c6-4aa0-8a62-94d3179dfe54\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821146 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821163 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5qlp\" (UniqueName: \"kubernetes.io/projected/f997bb38-4f6e-495f-acb8-e8e0d1730947-kube-api-access-n5qlp\") pod \"kube-storage-version-migrator-operator-b67b599dd-vp6qk\" (UID: \"f997bb38-4f6e-495f-acb8-e8e0d1730947\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821179 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-serving-cert\") pod \"route-controller-manager-6576b87f9c-5xwk8\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821195 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c-machine-approver-tls\") pod \"machine-approver-56656f9798-ntqvz\" (UID: \"1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821213 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3201b51c-af63-40e7-8037-9e581791d495-etcd-client\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821228 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8465162e-dd9f-45b4-83a6-94666ac2b87b-config\") pod \"machine-api-operator-5694c8668f-cclnc\" (UID: \"8465162e-dd9f-45b4-83a6-94666ac2b87b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821243 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96e1443d-dd18-4343-b200-756f9511c163-service-ca-bundle\") pod \"authentication-operator-69f744f599-jvxv4\" (UID: \"96e1443d-dd18-4343-b200-756f9511c163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821260 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-audit-policies\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821280 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821297 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821316 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1f74368-89f6-44fb-aaa2-9159a217b4d7-trusted-ca\") pod \"console-operator-58897d9998-zjqz6\" (UID: \"f1f74368-89f6-44fb-aaa2-9159a217b4d7\") " pod="openshift-console-operator/console-operator-58897d9998-zjqz6" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821334 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhrlb\" (UniqueName: \"kubernetes.io/projected/1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c-kube-api-access-mhrlb\") pod \"machine-approver-56656f9798-ntqvz\" (UID: \"1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821365 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cldhz\" (UniqueName: \"kubernetes.io/projected/628cb8f4-a587-498f-9398-403e0af5eec4-kube-api-access-cldhz\") pod \"downloads-7954f5f757-wrqpb\" (UID: \"628cb8f4-a587-498f-9398-403e0af5eec4\") " pod="openshift-console/downloads-7954f5f757-wrqpb" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821393 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72phf\" (UniqueName: \"kubernetes.io/projected/29dca8bf-7bce-455b-812f-fca8861518ca-kube-api-access-72phf\") pod \"openshift-apiserver-operator-796bbdcf4f-vfcd9\" (UID: \"29dca8bf-7bce-455b-812f-fca8861518ca\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821425 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-serving-cert\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821448 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c510b795-d750-4f94-bc9a-88ba625bd556-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-cfw2n\" (UID: \"c510b795-d750-4f94-bc9a-88ba625bd556\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821466 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b745a377-4575-45fb-a206-ea4754ecff76-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-phm68\" (UID: \"b745a377-4575-45fb-a206-ea4754ecff76\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821483 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-trusted-ca-bundle\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821500 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hxmk\" (UniqueName: \"kubernetes.io/projected/863eda44-9a47-42de-b2de-49234ac647f0-kube-api-access-8hxmk\") pod \"dns-operator-744455d44c-n2h44\" (UID: \"863eda44-9a47-42de-b2de-49234ac647f0\") " pod="openshift-dns-operator/dns-operator-744455d44c-n2h44" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821515 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kgjc\" (UniqueName: \"kubernetes.io/projected/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-kube-api-access-9kgjc\") pod \"route-controller-manager-6576b87f9c-5xwk8\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821532 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gck6q\" (UniqueName: \"kubernetes.io/projected/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-kube-api-access-gck6q\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821547 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-config\") pod \"route-controller-manager-6576b87f9c-5xwk8\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821562 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzkzm\" (UniqueName: \"kubernetes.io/projected/0007a585-5b17-44bd-89b8-2d1d233a03d4-kube-api-access-gzkzm\") pod \"olm-operator-6b444d44fb-zkkpc\" (UID: \"0007a585-5b17-44bd-89b8-2d1d233a03d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821578 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-node-pullsecrets\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821596 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-oauth-config\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821611 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96e1443d-dd18-4343-b200-756f9511c163-serving-cert\") pod \"authentication-operator-69f744f599-jvxv4\" (UID: \"96e1443d-dd18-4343-b200-756f9511c163\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821628 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821738 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5c8e7010-8b57-47ed-9270-417650a2a7c5-proxy-tls\") pod \"machine-config-operator-74547568cd-hqjnl\" (UID: \"5c8e7010-8b57-47ed-9270-417650a2a7c5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821826 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c-auth-proxy-config\") pod \"machine-approver-56656f9798-ntqvz\" (UID: \"1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821962 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c510b795-d750-4f94-bc9a-88ba625bd556-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-cfw2n\" (UID: \"c510b795-d750-4f94-bc9a-88ba625bd556\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821995 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/52d94566-7844-4414-bf48-9122c2207dd6-stats-auth\") pod \"router-default-5444994796-v7wnh\" (UID: \"52d94566-7844-4414-bf48-9122c2207dd6\") " pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.822644 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/52d94566-7844-4414-bf48-9122c2207dd6-metrics-certs\") pod \"router-default-5444994796-v7wnh\" (UID: \"52d94566-7844-4414-bf48-9122c2207dd6\") " pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.823076 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5c8e7010-8b57-47ed-9270-417650a2a7c5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hqjnl\" (UID: \"5c8e7010-8b57-47ed-9270-417650a2a7c5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.823095 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.823124 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3201b51c-af63-40e7-8037-9e581791d495-etcd-ca\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.823139 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-config\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.823420 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-client-ca\") pod \"controller-manager-879f6c89f-wjlxh\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.823457 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2957ef21-9f30-4c81-8c6a-4a7f9d7315db-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-72bt6\" (UID: \"2957ef21-9f30-4c81-8c6a-4a7f9d7315db\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.823735 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f997bb38-4f6e-495f-acb8-e8e0d1730947-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vp6qk\" (UID: \"f997bb38-4f6e-495f-acb8-e8e0d1730947\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.823959 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0007a585-5b17-44bd-89b8-2d1d233a03d4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zkkpc\" (UID: \"0007a585-5b17-44bd-89b8-2d1d233a03d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.824318 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e960def-7bc7-4041-94dc-8ccea63f8bb8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7cs59\" (UID: \"1e960def-7bc7-4041-94dc-8ccea63f8bb8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.824322 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8465162e-dd9f-45b4-83a6-94666ac2b87b-images\") pod \"machine-api-operator-5694c8668f-cclnc\" (UID: \"8465162e-dd9f-45b4-83a6-94666ac2b87b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc" Jan 21 10:58:29 crc kubenswrapper[4881]: 
I0121 10:58:29.825255 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-audit\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.825354 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29dca8bf-7bce-455b-812f-fca8861518ca-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vfcd9\" (UID: \"29dca8bf-7bce-455b-812f-fca8861518ca\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.825974 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-config\") pod \"controller-manager-879f6c89f-wjlxh\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.827627 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29dca8bf-7bce-455b-812f-fca8861518ca-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vfcd9\" (UID: \"29dca8bf-7bce-455b-812f-fca8861518ca\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.846019 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-serving-cert\") pod \"route-controller-manager-6576b87f9c-5xwk8\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.846203 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5d68a50c-6a38-4aba-bb02-9a25712d2212-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8kvzw\" (UID: \"5d68a50c-6a38-4aba-bb02-9a25712d2212\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.846860 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.847952 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7470431a-2a31-41ae-b021-510ae5e3c505-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vwqwb\" (UID: \"7470431a-2a31-41ae-b021-510ae5e3c505\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848007 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848046 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5f2944a8-8d91-4461-aa64-8908ca17f59e-signing-cabundle\") pod \"service-ca-9c57cc56f-llgd7\" (UID: \"5f2944a8-8d91-4461-aa64-8908ca17f59e\") " pod="openshift-service-ca/service-ca-9c57cc56f-llgd7" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848083 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/52d94566-7844-4414-bf48-9122c2207dd6-default-certificate\") pod \"router-default-5444994796-v7wnh\" (UID: \"52d94566-7844-4414-bf48-9122c2207dd6\") " pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848115 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-pjbh7\" (UID: \"e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848171 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-audit-dir\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848201 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/537a87a4-8f58-441f-9199-62c5849c693c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-rslv2\" (UID: \"537a87a4-8f58-441f-9199-62c5849c693c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848225 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c-config\") pod \"machine-approver-56656f9798-ntqvz\" (UID: \"1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848248 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wjlxh\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848276 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c510b795-d750-4f94-bc9a-88ba625bd556-config\") pod \"kube-apiserver-operator-766d6c64bb-cfw2n\" (UID: \"c510b795-d750-4f94-bc9a-88ba625bd556\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848303 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3201b51c-af63-40e7-8037-9e581791d495-config\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848326 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-encryption-config\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848414 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7rfj\" (UniqueName: \"kubernetes.io/projected/b745a377-4575-45fb-a206-ea4754ecff76-kube-api-access-p7rfj\") pod \"cluster-samples-operator-665b6dd947-phm68\" (UID: \"b745a377-4575-45fb-a206-ea4754ecff76\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848441 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e960def-7bc7-4041-94dc-8ccea63f8bb8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7cs59\" (UID: \"1e960def-7bc7-4041-94dc-8ccea63f8bb8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848467 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9cg8\" (UniqueName: \"kubernetes.io/projected/6742e18f-a187-4a77-a734-bdec89bd89e0-kube-api-access-c9cg8\") pod \"multus-admission-controller-857f4d67dd-j4s5w\" (UID: \"6742e18f-a187-4a77-a734-bdec89bd89e0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-j4s5w" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848494 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-config\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848523 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-audit-policies\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848594 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr9fr\" (UniqueName: \"kubernetes.io/projected/5f2944a8-8d91-4461-aa64-8908ca17f59e-kube-api-access-dr9fr\") pod \"service-ca-9c57cc56f-llgd7\" (UID: \"5f2944a8-8d91-4461-aa64-8908ca17f59e\") " pod="openshift-service-ca/service-ca-9c57cc56f-llgd7" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848621 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb-kube-api-access\") pod 
\"kube-controller-manager-operator-78b949d7b-pjbh7\" (UID: \"e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848647 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8465162e-dd9f-45b4-83a6-94666ac2b87b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-cclnc\" (UID: \"8465162e-dd9f-45b4-83a6-94666ac2b87b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848670 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4vrg\" (UniqueName: \"kubernetes.io/projected/27c4b3cb-57d3-4282-93fe-16cfab039277-kube-api-access-z4vrg\") pod \"openshift-controller-manager-operator-756b6f6bc6-lm4k2\" (UID: \"27c4b3cb-57d3-4282-93fe-16cfab039277\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848696 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e94f1e92-21b2-44c9-b499-b879850c288d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xmq82\" (UID: \"e94f1e92-21b2-44c9-b499-b879850c288d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848798 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5477x\" (UniqueName: \"kubernetes.io/projected/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-kube-api-access-5477x\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.849537 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-config\") pod \"route-controller-manager-6576b87f9c-5xwk8\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821330 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zjqz6"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.850330 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jvxv4"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.852428 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/002a39eb-e2e0-4d3e-8f61-89a539a653a9-serving-cert\") pod \"controller-manager-879f6c89f-wjlxh\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.852497 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27c4b3cb-57d3-4282-93fe-16cfab039277-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lm4k2\" (UID: \"27c4b3cb-57d3-4282-93fe-16cfab039277\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.852582 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n2nt\" (UniqueName: \"kubernetes.io/projected/52d94566-7844-4414-bf48-9122c2207dd6-kube-api-access-2n2nt\") pod \"router-default-5444994796-v7wnh\" (UID: \"52d94566-7844-4414-bf48-9122c2207dd6\") " pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.852609 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-oauth-serving-cert\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.852635 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.852670 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-image-import-ca\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.853201 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-service-ca\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.853235 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.853265 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn8zr\" (UniqueName: \"kubernetes.io/projected/7470431a-2a31-41ae-b021-510ae5e3c505-kube-api-access-hn8zr\") pod \"machine-config-controller-84d6567774-vwqwb\" (UID: \"7470431a-2a31-41ae-b021-510ae5e3c505\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.853288 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-trusted-ca-bundle\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.853313 4881 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f74368-89f6-44fb-aaa2-9159a217b4d7-serving-cert\") pod \"console-operator-58897d9998-zjqz6\" (UID: \"f1f74368-89f6-44fb-aaa2-9159a217b4d7\") " pod="openshift-console-operator/console-operator-58897d9998-zjqz6" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.854076 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-config\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.854108 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29dca8bf-7bce-455b-812f-fca8861518ca-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vfcd9\" (UID: \"29dca8bf-7bce-455b-812f-fca8861518ca\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.854603 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5c8e7010-8b57-47ed-9270-417650a2a7c5-images\") pod \"machine-config-operator-74547568cd-hqjnl\" (UID: \"5c8e7010-8b57-47ed-9270-417650a2a7c5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.854663 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/863eda44-9a47-42de-b2de-49234ac647f0-metrics-tls\") pod \"dns-operator-744455d44c-n2h44\" (UID: \"863eda44-9a47-42de-b2de-49234ac647f0\") " pod="openshift-dns-operator/dns-operator-744455d44c-n2h44" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.854694 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z962\" (UniqueName: \"kubernetes.io/projected/537a87a4-8f58-441f-9199-62c5849c693c-kube-api-access-4z962\") pod \"openshift-config-operator-7777fb866f-rslv2\" (UID: \"537a87a4-8f58-441f-9199-62c5849c693c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.854722 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27c4b3cb-57d3-4282-93fe-16cfab039277-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lm4k2\" (UID: \"27c4b3cb-57d3-4282-93fe-16cfab039277\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.854749 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5h6z\" (UniqueName: \"kubernetes.io/projected/5c8e7010-8b57-47ed-9270-417650a2a7c5-kube-api-access-v5h6z\") pod \"machine-config-operator-74547568cd-hqjnl\" (UID: \"5c8e7010-8b57-47ed-9270-417650a2a7c5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.854795 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6mtd\" (UniqueName: 
\"kubernetes.io/projected/5d68a50c-6a38-4aba-bb02-9a25712d2212-kube-api-access-r6mtd\") pod \"cluster-image-registry-operator-dc59b4c8b-8kvzw\" (UID: \"5d68a50c-6a38-4aba-bb02-9a25712d2212\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.854824 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0ceebcd8-2c53-4e4d-97bb-5d81008a6442-metrics-tls\") pod \"ingress-operator-5b745b69d9-w5l6w\" (UID: \"0ceebcd8-2c53-4e4d-97bb-5d81008a6442\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.854854 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb-config\") pod \"kube-controller-manager-operator-78b949d7b-pjbh7\" (UID: \"e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.854880 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4f8p\" (UniqueName: \"kubernetes.io/projected/8465162e-dd9f-45b4-83a6-94666ac2b87b-kube-api-access-d4f8p\") pod \"machine-api-operator-5694c8668f-cclnc\" (UID: \"8465162e-dd9f-45b4-83a6-94666ac2b87b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.854907 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-audit-dir\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.854934 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.854961 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6742e18f-a187-4a77-a734-bdec89bd89e0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-j4s5w\" (UID: \"6742e18f-a187-4a77-a734-bdec89bd89e0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-j4s5w" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.855334 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0ceebcd8-2c53-4e4d-97bb-5d81008a6442-bound-sa-token\") pod \"ingress-operator-5b745b69d9-w5l6w\" (UID: \"0ceebcd8-2c53-4e4d-97bb-5d81008a6442\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.855365 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3201b51c-af63-40e7-8037-9e581791d495-etcd-service-ca\") 
pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.855392 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-serving-cert\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.855418 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96e1443d-dd18-4343-b200-756f9511c163-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jvxv4\" (UID: \"96e1443d-dd18-4343-b200-756f9511c163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.855444 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0007a585-5b17-44bd-89b8-2d1d233a03d4-srv-cert\") pod \"olm-operator-6b444d44fb-zkkpc\" (UID: \"0007a585-5b17-44bd-89b8-2d1d233a03d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.855475 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-audit-dir\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.855497 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blg69\" (UniqueName: \"kubernetes.io/projected/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-kube-api-access-blg69\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.855528 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-client-ca\") pod \"controller-manager-879f6c89f-wjlxh\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.855525 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0ceebcd8-2c53-4e4d-97bb-5d81008a6442-trusted-ca\") pod \"ingress-operator-5b745b69d9-w5l6w\" (UID: \"0ceebcd8-2c53-4e4d-97bb-5d81008a6442\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.855688 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5f2944a8-8d91-4461-aa64-8908ca17f59e-signing-key\") pod \"service-ca-9c57cc56f-llgd7\" (UID: \"5f2944a8-8d91-4461-aa64-8908ca17f59e\") " pod="openshift-service-ca/service-ca-9c57cc56f-llgd7" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.858356 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.858472 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-serving-cert\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.859129 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0ceebcd8-2c53-4e4d-97bb-5d81008a6442-trusted-ca\") pod \"ingress-operator-5b745b69d9-w5l6w\" (UID: \"0ceebcd8-2c53-4e4d-97bb-5d81008a6442\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.859719 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/537a87a4-8f58-441f-9199-62c5849c693c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-rslv2\" (UID: \"537a87a4-8f58-441f-9199-62c5849c693c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.859897 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27c4b3cb-57d3-4282-93fe-16cfab039277-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lm4k2\" (UID: \"27c4b3cb-57d3-4282-93fe-16cfab039277\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.861226 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f74368-89f6-44fb-aaa2-9159a217b4d7-config\") pod \"console-operator-58897d9998-zjqz6\" (UID: \"f1f74368-89f6-44fb-aaa2-9159a217b4d7\") " pod="openshift-console-operator/console-operator-58897d9998-zjqz6" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.861430 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-trusted-ca-bundle\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.861562 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-oauth-serving-cert\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.862363 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-etcd-client\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.862708 4881 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-serving-cert\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.863080 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.863959 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b745a377-4575-45fb-a206-ea4754ecff76-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-phm68\" (UID: \"b745a377-4575-45fb-a206-ea4754ecff76\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.864747 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1f74368-89f6-44fb-aaa2-9159a217b4d7-trusted-ca\") pod \"console-operator-58897d9998-zjqz6\" (UID: \"f1f74368-89f6-44fb-aaa2-9159a217b4d7\") " pod="openshift-console-operator/console-operator-58897d9998-zjqz6" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.865088 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-image-import-ca\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.865285 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-client-ca\") pod \"route-controller-manager-6576b87f9c-5xwk8\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.865925 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96e1443d-dd18-4343-b200-756f9511c163-config\") pod \"authentication-operator-69f744f599-jvxv4\" (UID: \"96e1443d-dd18-4343-b200-756f9511c163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.866011 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3201b51c-af63-40e7-8037-9e581791d495-etcd-service-ca\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.867469 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.867799 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c-config\") pod \"machine-approver-56656f9798-ntqvz\" (UID: \"1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.867846 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb-config\") pod \"kube-controller-manager-operator-78b949d7b-pjbh7\" (UID: \"e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.869204 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-service-ca\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.869845 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-etcd-serving-ca\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.870341 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8465162e-dd9f-45b4-83a6-94666ac2b87b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-cclnc\" (UID: \"8465162e-dd9f-45b4-83a6-94666ac2b87b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.870694 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-config\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.872185 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-serving-cert\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.872291 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-encryption-config\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.873037 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96e1443d-dd18-4343-b200-756f9511c163-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jvxv4\" (UID: \"96e1443d-dd18-4343-b200-756f9511c163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.873116 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3201b51c-af63-40e7-8037-9e581791d495-config\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.873139 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-audit-dir\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.873529 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.873540 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-audit-dir\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.873850 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-pjbh7\" (UID: \"e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.875171 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-node-pullsecrets\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.875355 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wjlxh\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.875705 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96e1443d-dd18-4343-b200-756f9511c163-service-ca-bundle\") pod \"authentication-operator-69f744f599-jvxv4\" (UID: \"96e1443d-dd18-4343-b200-756f9511c163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.875730 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.875974 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8465162e-dd9f-45b4-83a6-94666ac2b87b-config\") pod \"machine-api-operator-5694c8668f-cclnc\" (UID: \"8465162e-dd9f-45b4-83a6-94666ac2b87b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc" Jan 21 
10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.877084 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d68a50c-6a38-4aba-bb02-9a25712d2212-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8kvzw\" (UID: \"5d68a50c-6a38-4aba-bb02-9a25712d2212\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.877741 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-trusted-ca-bundle\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.878893 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-audit-policies\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.878986 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-j4s5w"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.880339 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-llgd7"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.880452 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-oauth-config\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.880606 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-etcd-client\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.880859 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27c4b3cb-57d3-4282-93fe-16cfab039277-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lm4k2\" (UID: \"27c4b3cb-57d3-4282-93fe-16cfab039277\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.881827 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e960def-7bc7-4041-94dc-8ccea63f8bb8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7cs59\" (UID: \"1e960def-7bc7-4041-94dc-8ccea63f8bb8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.881849 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f74368-89f6-44fb-aaa2-9159a217b4d7-serving-cert\") pod \"console-operator-58897d9998-zjqz6\" (UID: 
\"f1f74368-89f6-44fb-aaa2-9159a217b4d7\") " pod="openshift-console-operator/console-operator-58897d9998-zjqz6" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.882536 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.884922 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/863eda44-9a47-42de-b2de-49234ac647f0-metrics-tls\") pod \"dns-operator-744455d44c-n2h44\" (UID: \"863eda44-9a47-42de-b2de-49234ac647f0\") " pod="openshift-dns-operator/dns-operator-744455d44c-n2h44" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.885212 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.887333 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-encryption-config\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.887566 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0ceebcd8-2c53-4e4d-97bb-5d81008a6442-metrics-tls\") pod \"ingress-operator-5b745b69d9-w5l6w\" (UID: \"0ceebcd8-2c53-4e4d-97bb-5d81008a6442\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.888190 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/537a87a4-8f58-441f-9199-62c5849c693c-serving-cert\") pod \"openshift-config-operator-7777fb866f-rslv2\" (UID: \"537a87a4-8f58-441f-9199-62c5849c693c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.888255 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.888556 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.888580 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3201b51c-af63-40e7-8037-9e581791d495-serving-cert\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.889318 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96e1443d-dd18-4343-b200-756f9511c163-serving-cert\") pod \"authentication-operator-69f744f599-jvxv4\" (UID: \"96e1443d-dd18-4343-b200-756f9511c163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.890307 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c-machine-approver-tls\") pod 
\"machine-approver-56656f9798-ntqvz\" (UID: \"1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.890474 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-whh46"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.891570 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kl9j4"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.892576 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.893576 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xmq82"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.894390 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3201b51c-af63-40e7-8037-9e581791d495-etcd-ca\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.894602 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-42f9f"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.896131 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-znm6j"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.896250 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.896777 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.897028 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-znm6j" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.897760 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qpdx4"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.898638 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-wrqpb"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.899609 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-f877x"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.900596 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-znm6j"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.906239 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.907669 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hfc8p"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.908306 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.908848 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-42f9f"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.927868 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3201b51c-af63-40e7-8037-9e581791d495-etcd-client\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.928514 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.948089 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.957553 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbqhc\" (UniqueName: \"kubernetes.io/projected/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-kube-api-access-lbqhc\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.957768 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.958173 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8jwm\" (UniqueName: \"kubernetes.io/projected/c56c4a24-e6c6-4aa0-8a62-94d3179dfe54-kube-api-access-l8jwm\") pod \"catalog-operator-68c6474976-7gdkq\" (UID: \"c56c4a24-e6c6-4aa0-8a62-94d3179dfe54\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" 
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.958369 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f997bb38-4f6e-495f-acb8-e8e0d1730947-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vp6qk\" (UID: \"f997bb38-4f6e-495f-acb8-e8e0d1730947\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.958570 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e94f1e92-21b2-44c9-b499-b879850c288d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xmq82\" (UID: \"e94f1e92-21b2-44c9-b499-b879850c288d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.958714 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.958925 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c56c4a24-e6c6-4aa0-8a62-94d3179dfe54-profile-collector-cert\") pod \"catalog-operator-68c6474976-7gdkq\" (UID: \"c56c4a24-e6c6-4aa0-8a62-94d3179dfe54\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.959106 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7470431a-2a31-41ae-b021-510ae5e3c505-proxy-tls\") pod \"machine-config-controller-84d6567774-vwqwb\" (UID: \"7470431a-2a31-41ae-b021-510ae5e3c505\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.959321 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-995tp\" (UniqueName: \"kubernetes.io/projected/e94f1e92-21b2-44c9-b499-b879850c288d-kube-api-access-995tp\") pod \"marketplace-operator-79b997595-xmq82\" (UID: \"e94f1e92-21b2-44c9-b499-b879850c288d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.959552 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgqgf\" (UniqueName: \"kubernetes.io/projected/86ac2c23-01e6-4a22-a79d-77a3269fb5a0-kube-api-access-wgqgf\") pod \"migrator-59844c95c7-qpdx4\" (UID: \"86ac2c23-01e6-4a22-a79d-77a3269fb5a0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpdx4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.959694 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c56c4a24-e6c6-4aa0-8a62-94d3179dfe54-srv-cert\") pod \"catalog-operator-68c6474976-7gdkq\" (UID: \"c56c4a24-e6c6-4aa0-8a62-94d3179dfe54\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.959873 4881 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.959971 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/52d94566-7844-4414-bf48-9122c2207dd6-stats-auth\") pod \"router-default-5444994796-v7wnh\" (UID: \"52d94566-7844-4414-bf48-9122c2207dd6\") " pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.959995 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5qlp\" (UniqueName: \"kubernetes.io/projected/f997bb38-4f6e-495f-acb8-e8e0d1730947-kube-api-access-n5qlp\") pod \"kube-storage-version-migrator-operator-b67b599dd-vp6qk\" (UID: \"f997bb38-4f6e-495f-acb8-e8e0d1730947\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.960200 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-audit-policies\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.960314 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.960418 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.960555 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c510b795-d750-4f94-bc9a-88ba625bd556-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-cfw2n\" (UID: \"c510b795-d750-4f94-bc9a-88ba625bd556\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.960672 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzkzm\" (UniqueName: \"kubernetes.io/projected/0007a585-5b17-44bd-89b8-2d1d233a03d4-kube-api-access-gzkzm\") pod \"olm-operator-6b444d44fb-zkkpc\" (UID: \"0007a585-5b17-44bd-89b8-2d1d233a03d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.960856 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.961012 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5c8e7010-8b57-47ed-9270-417650a2a7c5-proxy-tls\") pod \"machine-config-operator-74547568cd-hqjnl\" (UID: \"5c8e7010-8b57-47ed-9270-417650a2a7c5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.961110 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c510b795-d750-4f94-bc9a-88ba625bd556-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-cfw2n\" (UID: \"c510b795-d750-4f94-bc9a-88ba625bd556\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.961211 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5c8e7010-8b57-47ed-9270-417650a2a7c5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hqjnl\" (UID: \"5c8e7010-8b57-47ed-9270-417650a2a7c5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.961308 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.961432 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2957ef21-9f30-4c81-8c6a-4a7f9d7315db-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-72bt6\" (UID: \"2957ef21-9f30-4c81-8c6a-4a7f9d7315db\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.961544 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f997bb38-4f6e-495f-acb8-e8e0d1730947-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vp6qk\" (UID: \"f997bb38-4f6e-495f-acb8-e8e0d1730947\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.961640 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0007a585-5b17-44bd-89b8-2d1d233a03d4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zkkpc\" (UID: \"0007a585-5b17-44bd-89b8-2d1d233a03d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.961750 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7470431a-2a31-41ae-b021-510ae5e3c505-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vwqwb\" (UID: \"7470431a-2a31-41ae-b021-510ae5e3c505\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.961939 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.962066 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5f2944a8-8d91-4461-aa64-8908ca17f59e-signing-cabundle\") pod \"service-ca-9c57cc56f-llgd7\" (UID: \"5f2944a8-8d91-4461-aa64-8908ca17f59e\") " pod="openshift-service-ca/service-ca-9c57cc56f-llgd7" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.962199 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c510b795-d750-4f94-bc9a-88ba625bd556-config\") pod \"kube-apiserver-operator-766d6c64bb-cfw2n\" (UID: \"c510b795-d750-4f94-bc9a-88ba625bd556\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.962367 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9cg8\" (UniqueName: \"kubernetes.io/projected/6742e18f-a187-4a77-a734-bdec89bd89e0-kube-api-access-c9cg8\") pod \"multus-admission-controller-857f4d67dd-j4s5w\" (UID: \"6742e18f-a187-4a77-a734-bdec89bd89e0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-j4s5w" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.962481 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr9fr\" (UniqueName: \"kubernetes.io/projected/5f2944a8-8d91-4461-aa64-8908ca17f59e-kube-api-access-dr9fr\") pod \"service-ca-9c57cc56f-llgd7\" (UID: \"5f2944a8-8d91-4461-aa64-8908ca17f59e\") " pod="openshift-service-ca/service-ca-9c57cc56f-llgd7" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.962600 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e94f1e92-21b2-44c9-b499-b879850c288d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xmq82\" (UID: \"e94f1e92-21b2-44c9-b499-b879850c288d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.962729 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7470431a-2a31-41ae-b021-510ae5e3c505-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vwqwb\" (UID: \"7470431a-2a31-41ae-b021-510ae5e3c505\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.962082 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/5c8e7010-8b57-47ed-9270-417650a2a7c5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hqjnl\" (UID: \"5c8e7010-8b57-47ed-9270-417650a2a7c5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.962927 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.963169 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.963295 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn8zr\" (UniqueName: \"kubernetes.io/projected/7470431a-2a31-41ae-b021-510ae5e3c505-kube-api-access-hn8zr\") pod \"machine-config-controller-84d6567774-vwqwb\" (UID: \"7470431a-2a31-41ae-b021-510ae5e3c505\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.963532 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5c8e7010-8b57-47ed-9270-417650a2a7c5-images\") pod \"machine-config-operator-74547568cd-hqjnl\" (UID: \"5c8e7010-8b57-47ed-9270-417650a2a7c5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.963668 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5h6z\" (UniqueName: \"kubernetes.io/projected/5c8e7010-8b57-47ed-9270-417650a2a7c5-kube-api-access-v5h6z\") pod \"machine-config-operator-74547568cd-hqjnl\" (UID: \"5c8e7010-8b57-47ed-9270-417650a2a7c5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.963944 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-audit-dir\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.964130 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.964324 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/6742e18f-a187-4a77-a734-bdec89bd89e0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-j4s5w\" (UID: \"6742e18f-a187-4a77-a734-bdec89bd89e0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-j4s5w" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.964467 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0007a585-5b17-44bd-89b8-2d1d233a03d4-srv-cert\") pod \"olm-operator-6b444d44fb-zkkpc\" (UID: \"0007a585-5b17-44bd-89b8-2d1d233a03d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.964670 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5f2944a8-8d91-4461-aa64-8908ca17f59e-signing-key\") pod \"service-ca-9c57cc56f-llgd7\" (UID: \"5f2944a8-8d91-4461-aa64-8908ca17f59e\") " pod="openshift-service-ca/service-ca-9c57cc56f-llgd7" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.964945 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pkjt\" (UniqueName: \"kubernetes.io/projected/2957ef21-9f30-4c81-8c6a-4a7f9d7315db-kube-api-access-9pkjt\") pod \"package-server-manager-789f6589d5-72bt6\" (UID: \"2957ef21-9f30-4c81-8c6a-4a7f9d7315db\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.964093 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-audit-dir\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.968602 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.988693 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.999523 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/52d94566-7844-4414-bf48-9122c2207dd6-default-certificate\") pod \"router-default-5444994796-v7wnh\" (UID: \"52d94566-7844-4414-bf48-9122c2207dd6\") " pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.007591 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.014120 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52d94566-7844-4414-bf48-9122c2207dd6-service-ca-bundle\") pod \"router-default-5444994796-v7wnh\" (UID: \"52d94566-7844-4414-bf48-9122c2207dd6\") " pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.027545 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.038708 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/52d94566-7844-4414-bf48-9122c2207dd6-metrics-certs\") pod \"router-default-5444994796-v7wnh\" (UID: \"52d94566-7844-4414-bf48-9122c2207dd6\") " pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.047722 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.068689 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.072366 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7470431a-2a31-41ae-b021-510ae5e3c505-proxy-tls\") pod \"machine-config-controller-84d6567774-vwqwb\" (UID: \"7470431a-2a31-41ae-b021-510ae5e3c505\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.087731 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.108169 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.117673 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6742e18f-a187-4a77-a734-bdec89bd89e0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-j4s5w\" (UID: \"6742e18f-a187-4a77-a734-bdec89bd89e0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-j4s5w" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.128460 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.147494 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.168059 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.174590 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.188181 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.193638 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-audit-policies\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.208468 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 
21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.221364 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.227908 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.249327 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.270655 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.281287 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.288378 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.292426 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.309662 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.309739 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.309681 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.328341 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.337355 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.337570 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.348444 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.356303 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.360133 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.374309 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.383005 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.389511 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.408909 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.414632 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.435586 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.443046 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.448164 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.469094 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.474270 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.488512 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.508212 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.557294 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5c8e7010-8b57-47ed-9270-417650a2a7c5-images\") pod \"machine-config-operator-74547568cd-hqjnl\" (UID: \"5c8e7010-8b57-47ed-9270-417650a2a7c5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.558655 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.558900 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.565323 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f997bb38-4f6e-495f-acb8-e8e0d1730947-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vp6qk\" (UID: \"f997bb38-4f6e-495f-acb8-e8e0d1730947\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.569026 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.588031 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.589690 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f997bb38-4f6e-495f-acb8-e8e0d1730947-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vp6qk\" (UID: \"f997bb38-4f6e-495f-acb8-e8e0d1730947\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.607591 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.628802 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.637338 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5c8e7010-8b57-47ed-9270-417650a2a7c5-proxy-tls\") pod \"machine-config-operator-74547568cd-hqjnl\" (UID: \"5c8e7010-8b57-47ed-9270-417650a2a7c5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.649018 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.667443 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.689247 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.708444 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.728155 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.748851 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.756416 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2957ef21-9f30-4c81-8c6a-4a7f9d7315db-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-72bt6\" (UID: \"2957ef21-9f30-4c81-8c6a-4a7f9d7315db\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.766646 4881 request.go:700] Waited for 1.002050161s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-dockercfg-x57mr&limit=500&resourceVersion=0
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.768521 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.788545 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.793895 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c510b795-d750-4f94-bc9a-88ba625bd556-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-cfw2n\" (UID: \"c510b795-d750-4f94-bc9a-88ba625bd556\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.808856 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.828557 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.848025 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.853570 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c510b795-d750-4f94-bc9a-88ba625bd556-config\") pod \"kube-apiserver-operator-766d6c64bb-cfw2n\" (UID: \"c510b795-d750-4f94-bc9a-88ba625bd556\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.868192 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.872923 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e94f1e92-21b2-44c9-b499-b879850c288d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xmq82\" (UID: \"e94f1e92-21b2-44c9-b499-b879850c288d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xmq82"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.889188 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.914781 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.924484 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e94f1e92-21b2-44c9-b499-b879850c288d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xmq82\" (UID: \"e94f1e92-21b2-44c9-b499-b879850c288d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xmq82"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.928457 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.932311 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c56c4a24-e6c6-4aa0-8a62-94d3179dfe54-profile-collector-cert\") pod \"catalog-operator-68c6474976-7gdkq\" (UID: \"c56c4a24-e6c6-4aa0-8a62-94d3179dfe54\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.935411 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0007a585-5b17-44bd-89b8-2d1d233a03d4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zkkpc\" (UID: \"0007a585-5b17-44bd-89b8-2d1d233a03d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.948271 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 21 10:58:30 crc kubenswrapper[4881]: E0121 10:58:30.960475 4881 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 21 10:58:30 crc kubenswrapper[4881]: E0121 10:58:30.960808 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c56c4a24-e6c6-4aa0-8a62-94d3179dfe54-srv-cert podName:c56c4a24-e6c6-4aa0-8a62-94d3179dfe54 nodeName:}" failed. No retries permitted until 2026-01-21 10:58:31.460766809 +0000 UTC m=+98.720723278 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c56c4a24-e6c6-4aa0-8a62-94d3179dfe54-srv-cert") pod "catalog-operator-68c6474976-7gdkq" (UID: "c56c4a24-e6c6-4aa0-8a62-94d3179dfe54") : failed to sync secret cache: timed out waiting for the condition
Jan 21 10:58:30 crc kubenswrapper[4881]: E0121 10:58:30.962676 4881 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition
Jan 21 10:58:30 crc kubenswrapper[4881]: E0121 10:58:30.962774 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5f2944a8-8d91-4461-aa64-8908ca17f59e-signing-cabundle podName:5f2944a8-8d91-4461-aa64-8908ca17f59e nodeName:}" failed. No retries permitted until 2026-01-21 10:58:31.462755787 +0000 UTC m=+98.722712246 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/5f2944a8-8d91-4461-aa64-8908ca17f59e-signing-cabundle") pod "service-ca-9c57cc56f-llgd7" (UID: "5f2944a8-8d91-4461-aa64-8908ca17f59e") : failed to sync configmap cache: timed out waiting for the condition
Jan 21 10:58:30 crc kubenswrapper[4881]: E0121 10:58:30.964941 4881 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition
Jan 21 10:58:30 crc kubenswrapper[4881]: E0121 10:58:30.964981 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f2944a8-8d91-4461-aa64-8908ca17f59e-signing-key podName:5f2944a8-8d91-4461-aa64-8908ca17f59e nodeName:}" failed. No retries permitted until 2026-01-21 10:58:31.464972072 +0000 UTC m=+98.724928541 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/5f2944a8-8d91-4461-aa64-8908ca17f59e-signing-key") pod "service-ca-9c57cc56f-llgd7" (UID: "5f2944a8-8d91-4461-aa64-8908ca17f59e") : failed to sync secret cache: timed out waiting for the condition
Jan 21 10:58:30 crc kubenswrapper[4881]: E0121 10:58:30.964997 4881 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 21 10:58:30 crc kubenswrapper[4881]: E0121 10:58:30.965077 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0007a585-5b17-44bd-89b8-2d1d233a03d4-srv-cert podName:0007a585-5b17-44bd-89b8-2d1d233a03d4 nodeName:}" failed. No retries permitted until 2026-01-21 10:58:31.465055454 +0000 UTC m=+98.725011923 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/0007a585-5b17-44bd-89b8-2d1d233a03d4-srv-cert") pod "olm-operator-6b444d44fb-zkkpc" (UID: "0007a585-5b17-44bd-89b8-2d1d233a03d4") : failed to sync secret cache: timed out waiting for the condition
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.967699 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.987656 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.008123 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.027818 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.048372 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.068714 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.089038 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.108616 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.129705 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.148779 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.187977 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.208734 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.229094 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.248164 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.268911 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.288442 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.308565 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.310432 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.328989 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.348336 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.370410 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.389111 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.408070 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.427895 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.448163 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.467658 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.489494 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.503689 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5f2944a8-8d91-4461-aa64-8908ca17f59e-signing-cabundle\") pod \"service-ca-9c57cc56f-llgd7\" (UID: \"5f2944a8-8d91-4461-aa64-8908ca17f59e\") " pod="openshift-service-ca/service-ca-9c57cc56f-llgd7"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.503952 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0007a585-5b17-44bd-89b8-2d1d233a03d4-srv-cert\") pod \"olm-operator-6b444d44fb-zkkpc\" (UID: \"0007a585-5b17-44bd-89b8-2d1d233a03d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.504003 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5f2944a8-8d91-4461-aa64-8908ca17f59e-signing-key\") pod \"service-ca-9c57cc56f-llgd7\" (UID: \"5f2944a8-8d91-4461-aa64-8908ca17f59e\") " pod="openshift-service-ca/service-ca-9c57cc56f-llgd7"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.504149 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c56c4a24-e6c6-4aa0-8a62-94d3179dfe54-srv-cert\") pod \"catalog-operator-68c6474976-7gdkq\" (UID: \"c56c4a24-e6c6-4aa0-8a62-94d3179dfe54\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.505484 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5f2944a8-8d91-4461-aa64-8908ca17f59e-signing-cabundle\") pod \"service-ca-9c57cc56f-llgd7\" (UID: \"5f2944a8-8d91-4461-aa64-8908ca17f59e\") " pod="openshift-service-ca/service-ca-9c57cc56f-llgd7"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.509408 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0007a585-5b17-44bd-89b8-2d1d233a03d4-srv-cert\") pod \"olm-operator-6b444d44fb-zkkpc\" (UID: \"0007a585-5b17-44bd-89b8-2d1d233a03d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.511613 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c56c4a24-e6c6-4aa0-8a62-94d3179dfe54-srv-cert\") pod \"catalog-operator-68c6474976-7gdkq\" (UID: \"c56c4a24-e6c6-4aa0-8a62-94d3179dfe54\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.511758 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5f2944a8-8d91-4461-aa64-8908ca17f59e-signing-key\") pod \"service-ca-9c57cc56f-llgd7\" (UID: \"5f2944a8-8d91-4461-aa64-8908ca17f59e\") " pod="openshift-service-ca/service-ca-9c57cc56f-llgd7"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.535542 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czg99\" (UniqueName: \"kubernetes.io/projected/0ceebcd8-2c53-4e4d-97bb-5d81008a6442-kube-api-access-czg99\") pod \"ingress-operator-5b745b69d9-w5l6w\" (UID: \"0ceebcd8-2c53-4e4d-97bb-5d81008a6442\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.549694 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72phf\" (UniqueName: \"kubernetes.io/projected/29dca8bf-7bce-455b-812f-fca8861518ca-kube-api-access-72phf\") pod \"openshift-apiserver-operator-796bbdcf4f-vfcd9\" (UID: \"29dca8bf-7bce-455b-812f-fca8861518ca\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.565239 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qshkt\" (UniqueName: \"kubernetes.io/projected/f1f74368-89f6-44fb-aaa2-9159a217b4d7-kube-api-access-qshkt\") pod \"console-operator-58897d9998-zjqz6\" (UID: \"f1f74368-89f6-44fb-aaa2-9159a217b4d7\") " pod="openshift-console-operator/console-operator-58897d9998-zjqz6"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.587664 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn8zf\" (UniqueName: \"kubernetes.io/projected/002a39eb-e2e0-4d3e-8f61-89a539a653a9-kube-api-access-vn8zf\") pod \"controller-manager-879f6c89f-wjlxh\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.595549 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-zjqz6"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.605687 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghfkh\" (UniqueName: \"kubernetes.io/projected/3201b51c-af63-40e7-8037-9e581791d495-kube-api-access-ghfkh\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.620382 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.627573 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gck6q\" (UniqueName: \"kubernetes.io/projected/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-kube-api-access-gck6q\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.648098 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhrlb\" (UniqueName: \"kubernetes.io/projected/1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c-kube-api-access-mhrlb\") pod \"machine-approver-56656f9798-ntqvz\" (UID: \"1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.731258 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0ceebcd8-2c53-4e4d-97bb-5d81008a6442-bound-sa-token\") pod \"ingress-operator-5b745b69d9-w5l6w\" (UID: \"0ceebcd8-2c53-4e4d-97bb-5d81008a6442\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.786639 4881 request.go:700] Waited for 1.918170143s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/machine-api-operator/token
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.811291 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4f8p\" (UniqueName: \"kubernetes.io/projected/8465162e-dd9f-45b4-83a6-94666ac2b87b-kube-api-access-d4f8p\") pod \"machine-api-operator-5694c8668f-cclnc\" (UID: \"8465162e-dd9f-45b4-83a6-94666ac2b87b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.834202 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cldhz\" (UniqueName: \"kubernetes.io/projected/628cb8f4-a587-498f-9398-403e0af5eec4-kube-api-access-cldhz\") pod \"downloads-7954f5f757-wrqpb\" (UID: \"628cb8f4-a587-498f-9398-403e0af5eec4\") " pod="openshift-console/downloads-7954f5f757-wrqpb"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.861348 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.861568 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.862233 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.863656 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.865069 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-wrqpb"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.865403 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-svmbc"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.885173 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.887384 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hxmk\" (UniqueName: \"kubernetes.io/projected/863eda44-9a47-42de-b2de-49234ac647f0-kube-api-access-8hxmk\") pod \"dns-operator-744455d44c-n2h44\" (UID: \"863eda44-9a47-42de-b2de-49234ac647f0\") " pod="openshift-dns-operator/dns-operator-744455d44c-n2h44"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.893755 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e960def-7bc7-4041-94dc-8ccea63f8bb8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7cs59\" (UID: \"1e960def-7bc7-4041-94dc-8ccea63f8bb8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.896916 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-pjbh7\" (UID: \"e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.901183 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blg69\" (UniqueName: \"kubernetes.io/projected/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-kube-api-access-blg69\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.901907 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qxzd9"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.903063 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n2nt\" (UniqueName: \"kubernetes.io/projected/52d94566-7844-4414-bf48-9122c2207dd6-kube-api-access-2n2nt\") pod \"router-default-5444994796-v7wnh\" (UID: \"52d94566-7844-4414-bf48-9122c2207dd6\") " pod="openshift-ingress/router-default-5444994796-v7wnh"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.909457 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kgjc\" (UniqueName: \"kubernetes.io/projected/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-kube-api-access-9kgjc\") pod \"route-controller-manager-6576b87f9c-5xwk8\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.912814 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5477x\" (UniqueName: \"kubernetes.io/projected/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-kube-api-access-5477x\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz"
Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.989678 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.009207 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z962\" (UniqueName: \"kubernetes.io/projected/537a87a4-8f58-441f-9199-62c5849c693c-kube-api-access-4z962\") pod \"openshift-config-operator-7777fb866f-rslv2\" (UID: \"537a87a4-8f58-441f-9199-62c5849c693c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.021680 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6mtd\" (UniqueName: \"kubernetes.io/projected/5d68a50c-6a38-4aba-bb02-9a25712d2212-kube-api-access-r6mtd\") pod \"cluster-image-registry-operator-dc59b4c8b-8kvzw\" (UID: \"5d68a50c-6a38-4aba-bb02-9a25712d2212\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.026661 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7rfj\" (UniqueName: \"kubernetes.io/projected/b745a377-4575-45fb-a206-ea4754ecff76-kube-api-access-p7rfj\") pod \"cluster-samples-operator-665b6dd947-phm68\" (UID: \"b745a377-4575-45fb-a206-ea4754ecff76\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.031534 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5d68a50c-6a38-4aba-bb02-9a25712d2212-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8kvzw\" (UID: \"5d68a50c-6a38-4aba-bb02-9a25712d2212\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.032156 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.032531 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-n2h44"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.032542 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.057772 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4vrg\" (UniqueName: \"kubernetes.io/projected/27c4b3cb-57d3-4282-93fe-16cfab039277-kube-api-access-z4vrg\") pod \"openshift-controller-manager-operator-756b6f6bc6-lm4k2\" (UID: \"27c4b3cb-57d3-4282-93fe-16cfab039277\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.060018 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.075228 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.075430 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.075799 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.075851 4881 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.076112 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ppts\" (UniqueName: \"kubernetes.io/projected/96e1443d-dd18-4343-b200-756f9511c163-kube-api-access-7ppts\") pod \"authentication-operator-69f744f599-jvxv4\" (UID: \"96e1443d-dd18-4343-b200-756f9511c163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.078231 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.088199 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.097107 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.109936 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-v7wnh"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.259210 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.259498 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.259587 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.262989 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbqhc\" (UniqueName: \"kubernetes.io/projected/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-kube-api-access-lbqhc\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.278255 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-995tp\" (UniqueName: \"kubernetes.io/projected/e94f1e92-21b2-44c9-b499-b879850c288d-kube-api-access-995tp\") pod \"marketplace-operator-79b997595-xmq82\" (UID: \"e94f1e92-21b2-44c9-b499-b879850c288d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xmq82"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.285363 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgqgf\" (UniqueName: \"kubernetes.io/projected/86ac2c23-01e6-4a22-a79d-77a3269fb5a0-kube-api-access-wgqgf\") pod \"migrator-59844c95c7-qpdx4\" (UID: \"86ac2c23-01e6-4a22-a79d-77a3269fb5a0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpdx4"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.288593 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8jwm\" (UniqueName: \"kubernetes.io/projected/c56c4a24-e6c6-4aa0-8a62-94d3179dfe54-kube-api-access-l8jwm\") pod \"catalog-operator-68c6474976-7gdkq\" (UID: \"c56c4a24-e6c6-4aa0-8a62-94d3179dfe54\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.292115 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5qlp\" (UniqueName: \"kubernetes.io/projected/f997bb38-4f6e-495f-acb8-e8e0d1730947-kube-api-access-n5qlp\") pod \"kube-storage-version-migrator-operator-b67b599dd-vp6qk\" (UID: \"f997bb38-4f6e-495f-acb8-e8e0d1730947\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.296618 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9cg8\" (UniqueName: \"kubernetes.io/projected/6742e18f-a187-4a77-a734-bdec89bd89e0-kube-api-access-c9cg8\") pod \"multus-admission-controller-857f4d67dd-j4s5w\" (UID: \"6742e18f-a187-4a77-a734-bdec89bd89e0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-j4s5w"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.342668 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr9fr\" (UniqueName: \"kubernetes.io/projected/5f2944a8-8d91-4461-aa64-8908ca17f59e-kube-api-access-dr9fr\") pod \"service-ca-9c57cc56f-llgd7\" (UID: \"5f2944a8-8d91-4461-aa64-8908ca17f59e\") " pod="openshift-service-ca/service-ca-9c57cc56f-llgd7"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.343542 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzkzm\" (UniqueName: \"kubernetes.io/projected/0007a585-5b17-44bd-89b8-2d1d233a03d4-kube-api-access-gzkzm\") pod \"olm-operator-6b444d44fb-zkkpc\" (UID: \"0007a585-5b17-44bd-89b8-2d1d233a03d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.344636 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c510b795-d750-4f94-bc9a-88ba625bd556-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-cfw2n\" (UID: \"c510b795-d750-4f94-bc9a-88ba625bd556\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.362694 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn8zr\" (UniqueName: \"kubernetes.io/projected/7470431a-2a31-41ae-b021-510ae5e3c505-kube-api-access-hn8zr\") pod \"machine-config-controller-84d6567774-vwqwb\" (UID: \"7470431a-2a31-41ae-b021-510ae5e3c505\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.363338 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.363685 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pkjt\" (UniqueName: \"kubernetes.io/projected/2957ef21-9f30-4c81-8c6a-4a7f9d7315db-kube-api-access-9pkjt\") pod \"package-server-manager-789f6589d5-72bt6\" (UID: \"2957ef21-9f30-4c81-8c6a-4a7f9d7315db\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.364615 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz" event={"ID":"1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c","Type":"ContainerStarted","Data":"d402858a5ef5514fec0754a973317b2de9ad2aaad9b3baa96045e00080574752"}
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.365388 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.366880 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5h6z\" (UniqueName: \"kubernetes.io/projected/5c8e7010-8b57-47ed-9270-417650a2a7c5-kube-api-access-v5h6z\") pod \"machine-config-operator-74547568cd-hqjnl\" (UID: \"5c8e7010-8b57-47ed-9270-417650a2a7c5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.367172 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.398362 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.414324 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.425165 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.434340 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.440293 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-j4s5w"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.452945 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.455849 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.472044 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.473103 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.473450 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6ljz\" (UniqueName: \"kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-kube-api-access-z6ljz\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.473490 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-registry-certificates\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.473516 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-registry-tls\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.473568 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7f30da15-7c75-4c87-9dc4-78653d6f84cd-apiservice-cert\") pod \"packageserver-d55dfcdfc-rdgn6\" (UID: \"7f30da15-7c75-4c87-9dc4-78653d6f84cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.473592 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-bound-sa-token\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.473609 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7f30da15-7c75-4c87-9dc4-78653d6f84cd-webhook-cert\") pod \"packageserver-d55dfcdfc-rdgn6\" (UID: \"7f30da15-7c75-4c87-9dc4-78653d6f84cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.473710 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7f30da15-7c75-4c87-9dc4-78653d6f84cd-tmpfs\") pod \"packageserver-d55dfcdfc-rdgn6\" (UID: \"7f30da15-7c75-4c87-9dc4-78653d6f84cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.473809 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnzz9\" (UniqueName: \"kubernetes.io/projected/7f30da15-7c75-4c87-9dc4-78653d6f84cd-kube-api-access-cnzz9\") pod \"packageserver-d55dfcdfc-rdgn6\" (UID: \"7f30da15-7c75-4c87-9dc4-78653d6f84cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.473835 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.473894 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-trusted-ca\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.473922 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:32 crc kubenswrapper[4881]: E0121 10:58:32.477216 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:32.977198664 +0000 UTC m=+100.237155123 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.478423 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.539931 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.543278 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpdx4"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.543941 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.544453 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.544743 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.544927 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-llgd7"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.545027 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.580257 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.580561 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7f30da15-7c75-4c87-9dc4-78653d6f84cd-tmpfs\") pod \"packageserver-d55dfcdfc-rdgn6\" (UID: \"7f30da15-7c75-4c87-9dc4-78653d6f84cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.580594 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/303bdbe6-3bb4-4ace-86b1-f489c795580f-config-volume\") pod \"collect-profiles-29483205-527gk\" (UID: \"303bdbe6-3bb4-4ace-86b1-f489c795580f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.580660 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f-config\") pod \"service-ca-operator-777779d784-f877x\" (UID: \"d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f877x"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.580679 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7qxx\" (UniqueName: \"kubernetes.io/projected/409e44ed-8f6d-4321-9620-d8da23cf0fec-kube-api-access-b7qxx\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.580724 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/409e44ed-8f6d-4321-9620-d8da23cf0fec-csi-data-dir\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.580747 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnzz9\" (UniqueName: \"kubernetes.io/projected/7f30da15-7c75-4c87-9dc4-78653d6f84cd-kube-api-access-cnzz9\") pod \"packageserver-d55dfcdfc-rdgn6\" (UID: \"7f30da15-7c75-4c87-9dc4-78653d6f84cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.580762 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqf99\" (UniqueName: \"kubernetes.io/projected/d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f-kube-api-access-pqf99\") pod \"service-ca-operator-777779d784-f877x\" (UID: \"d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f877x"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.580860 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.580923 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-trusted-ca\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.580984 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f-serving-cert\") pod \"service-ca-operator-777779d784-f877x\" (UID: \"d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f877x"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581024 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86acb693-c0d9-41f4-b33c-4716963ce268-cert\") pod \"ingress-canary-kl9j4\" (UID: \"86acb693-c0d9-41f4-b33c-4716963ce268\") " pod="openshift-ingress-canary/ingress-canary-kl9j4"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581038 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/dc0d7d08-d133-4880-a391-e8750932d507-node-bootstrap-token\") pod \"machine-config-server-468h5\" (UID: \"dc0d7d08-d133-4880-a391-e8750932d507\") " pod="openshift-machine-config-operator/machine-config-server-468h5"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581088 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkcp2\" (UniqueName: \"kubernetes.io/projected/bc38f0b5-944c-40ae-aed0-50ca39ea2627-kube-api-access-pkcp2\") pod \"control-plane-machine-set-operator-78cbb6b69f-hfc8p\" (UID: \"bc38f0b5-944c-40ae-aed0-50ca39ea2627\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hfc8p"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581129 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/303bdbe6-3bb4-4ace-86b1-f489c795580f-secret-volume\") pod \"collect-profiles-29483205-527gk\" (UID: \"303bdbe6-3bb4-4ace-86b1-f489c795580f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581180 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581248 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7e58845-f0a1-4320-b879-0765b6d57988-config-volume\") pod \"dns-default-znm6j\" (UID: \"b7e58845-f0a1-4320-b879-0765b6d57988\") " pod="openshift-dns/dns-default-znm6j"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581264 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l45nv\" (UniqueName: \"kubernetes.io/projected/303bdbe6-3bb4-4ace-86b1-f489c795580f-kube-api-access-l45nv\") pod \"collect-profiles-29483205-527gk\" (UID: \"303bdbe6-3bb4-4ace-86b1-f489c795580f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581357 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6ljz\" (UniqueName: \"kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-kube-api-access-z6ljz\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581374 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-registry-certificates\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581389 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/409e44ed-8f6d-4321-9620-d8da23cf0fec-plugins-dir\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581457 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/409e44ed-8f6d-4321-9620-d8da23cf0fec-socket-dir\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581482 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-registry-tls\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581513 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc5l6\" (UniqueName: \"kubernetes.io/projected/b7e58845-f0a1-4320-b879-0765b6d57988-kube-api-access-vc5l6\") pod \"dns-default-znm6j\" (UID: \"b7e58845-f0a1-4320-b879-0765b6d57988\") " pod="openshift-dns/dns-default-znm6j"
Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581538 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqvwd\" (UniqueName: \"kubernetes.io/projected/dc0d7d08-d133-4880-a391-e8750932d507-kube-api-access-sqvwd\") pod \"machine-config-server-468h5\" (UID: \"dc0d7d08-d133-4880-a391-e8750932d507\") " 
pod="openshift-machine-config-operator/machine-config-server-468h5" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581612 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7f30da15-7c75-4c87-9dc4-78653d6f84cd-apiservice-cert\") pod \"packageserver-d55dfcdfc-rdgn6\" (UID: \"7f30da15-7c75-4c87-9dc4-78653d6f84cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581637 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-bound-sa-token\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581660 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7f30da15-7c75-4c87-9dc4-78653d6f84cd-webhook-cert\") pod \"packageserver-d55dfcdfc-rdgn6\" (UID: \"7f30da15-7c75-4c87-9dc4-78653d6f84cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581692 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gc7m\" (UniqueName: \"kubernetes.io/projected/86acb693-c0d9-41f4-b33c-4716963ce268-kube-api-access-6gc7m\") pod \"ingress-canary-kl9j4\" (UID: \"86acb693-c0d9-41f4-b33c-4716963ce268\") " pod="openshift-ingress-canary/ingress-canary-kl9j4" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581800 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/409e44ed-8f6d-4321-9620-d8da23cf0fec-registration-dir\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581828 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bc38f0b5-944c-40ae-aed0-50ca39ea2627-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hfc8p\" (UID: \"bc38f0b5-944c-40ae-aed0-50ca39ea2627\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hfc8p" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581902 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/dc0d7d08-d133-4880-a391-e8750932d507-certs\") pod \"machine-config-server-468h5\" (UID: \"dc0d7d08-d133-4880-a391-e8750932d507\") " pod="openshift-machine-config-operator/machine-config-server-468h5" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581920 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/409e44ed-8f6d-4321-9620-d8da23cf0fec-mountpoint-dir\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581935 4881 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b7e58845-f0a1-4320-b879-0765b6d57988-metrics-tls\") pod \"dns-default-znm6j\" (UID: \"b7e58845-f0a1-4320-b879-0765b6d57988\") " pod="openshift-dns/dns-default-znm6j" Jan 21 10:58:32 crc kubenswrapper[4881]: E0121 10:58:32.582489 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:33.08246584 +0000 UTC m=+100.342422299 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.584521 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.584605 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7f30da15-7c75-4c87-9dc4-78653d6f84cd-tmpfs\") pod \"packageserver-d55dfcdfc-rdgn6\" (UID: \"7f30da15-7c75-4c87-9dc4-78653d6f84cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.587755 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-registry-certificates\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.594628 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-trusted-ca\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.599983 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-registry-tls\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.611682 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7f30da15-7c75-4c87-9dc4-78653d6f84cd-apiservice-cert\") pod \"packageserver-d55dfcdfc-rdgn6\" (UID: \"7f30da15-7c75-4c87-9dc4-78653d6f84cd\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.613654 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7f30da15-7c75-4c87-9dc4-78653d6f84cd-webhook-cert\") pod \"packageserver-d55dfcdfc-rdgn6\" (UID: \"7f30da15-7c75-4c87-9dc4-78653d6f84cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.627260 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6ljz\" (UniqueName: \"kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-kube-api-access-z6ljz\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.638570 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.661183 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-bound-sa-token\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.675421 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnzz9\" (UniqueName: \"kubernetes.io/projected/7f30da15-7c75-4c87-9dc4-78653d6f84cd-kube-api-access-cnzz9\") pod \"packageserver-d55dfcdfc-rdgn6\" (UID: \"7f30da15-7c75-4c87-9dc4-78653d6f84cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.683686 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gc7m\" (UniqueName: \"kubernetes.io/projected/86acb693-c0d9-41f4-b33c-4716963ce268-kube-api-access-6gc7m\") pod \"ingress-canary-kl9j4\" (UID: \"86acb693-c0d9-41f4-b33c-4716963ce268\") " pod="openshift-ingress-canary/ingress-canary-kl9j4" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.683757 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/409e44ed-8f6d-4321-9620-d8da23cf0fec-registration-dir\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.683794 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bc38f0b5-944c-40ae-aed0-50ca39ea2627-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hfc8p\" (UID: \"bc38f0b5-944c-40ae-aed0-50ca39ea2627\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hfc8p" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.683864 4881 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/dc0d7d08-d133-4880-a391-e8750932d507-certs\") pod \"machine-config-server-468h5\" (UID: \"dc0d7d08-d133-4880-a391-e8750932d507\") " pod="openshift-machine-config-operator/machine-config-server-468h5" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.683881 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/409e44ed-8f6d-4321-9620-d8da23cf0fec-mountpoint-dir\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.683899 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b7e58845-f0a1-4320-b879-0765b6d57988-metrics-tls\") pod \"dns-default-znm6j\" (UID: \"b7e58845-f0a1-4320-b879-0765b6d57988\") " pod="openshift-dns/dns-default-znm6j" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.683933 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/303bdbe6-3bb4-4ace-86b1-f489c795580f-config-volume\") pod \"collect-profiles-29483205-527gk\" (UID: \"303bdbe6-3bb4-4ace-86b1-f489c795580f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.683952 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f-config\") pod \"service-ca-operator-777779d784-f877x\" (UID: \"d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f877x" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.683972 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7qxx\" (UniqueName: \"kubernetes.io/projected/409e44ed-8f6d-4321-9620-d8da23cf0fec-kube-api-access-b7qxx\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.683990 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/409e44ed-8f6d-4321-9620-d8da23cf0fec-csi-data-dir\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.684008 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqf99\" (UniqueName: \"kubernetes.io/projected/d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f-kube-api-access-pqf99\") pod \"service-ca-operator-777779d784-f877x\" (UID: \"d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f877x" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.684044 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc 
kubenswrapper[4881]: I0121 10:58:32.684075 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f-serving-cert\") pod \"service-ca-operator-777779d784-f877x\" (UID: \"d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f877x" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.684090 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86acb693-c0d9-41f4-b33c-4716963ce268-cert\") pod \"ingress-canary-kl9j4\" (UID: \"86acb693-c0d9-41f4-b33c-4716963ce268\") " pod="openshift-ingress-canary/ingress-canary-kl9j4" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.684106 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/dc0d7d08-d133-4880-a391-e8750932d507-node-bootstrap-token\") pod \"machine-config-server-468h5\" (UID: \"dc0d7d08-d133-4880-a391-e8750932d507\") " pod="openshift-machine-config-operator/machine-config-server-468h5" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.684127 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkcp2\" (UniqueName: \"kubernetes.io/projected/bc38f0b5-944c-40ae-aed0-50ca39ea2627-kube-api-access-pkcp2\") pod \"control-plane-machine-set-operator-78cbb6b69f-hfc8p\" (UID: \"bc38f0b5-944c-40ae-aed0-50ca39ea2627\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hfc8p" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.684145 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/303bdbe6-3bb4-4ace-86b1-f489c795580f-secret-volume\") pod \"collect-profiles-29483205-527gk\" (UID: \"303bdbe6-3bb4-4ace-86b1-f489c795580f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.684168 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7e58845-f0a1-4320-b879-0765b6d57988-config-volume\") pod \"dns-default-znm6j\" (UID: \"b7e58845-f0a1-4320-b879-0765b6d57988\") " pod="openshift-dns/dns-default-znm6j" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.684164 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/409e44ed-8f6d-4321-9620-d8da23cf0fec-registration-dir\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.684183 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l45nv\" (UniqueName: \"kubernetes.io/projected/303bdbe6-3bb4-4ace-86b1-f489c795580f-kube-api-access-l45nv\") pod \"collect-profiles-29483205-527gk\" (UID: \"303bdbe6-3bb4-4ace-86b1-f489c795580f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.684295 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/409e44ed-8f6d-4321-9620-d8da23cf0fec-plugins-dir\") pod \"csi-hostpathplugin-42f9f\" (UID: 
\"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.684354 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/409e44ed-8f6d-4321-9620-d8da23cf0fec-socket-dir\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.684400 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc5l6\" (UniqueName: \"kubernetes.io/projected/b7e58845-f0a1-4320-b879-0765b6d57988-kube-api-access-vc5l6\") pod \"dns-default-znm6j\" (UID: \"b7e58845-f0a1-4320-b879-0765b6d57988\") " pod="openshift-dns/dns-default-znm6j" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.684432 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqvwd\" (UniqueName: \"kubernetes.io/projected/dc0d7d08-d133-4880-a391-e8750932d507-kube-api-access-sqvwd\") pod \"machine-config-server-468h5\" (UID: \"dc0d7d08-d133-4880-a391-e8750932d507\") " pod="openshift-machine-config-operator/machine-config-server-468h5" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.684709 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/409e44ed-8f6d-4321-9620-d8da23cf0fec-plugins-dir\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.684825 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/409e44ed-8f6d-4321-9620-d8da23cf0fec-socket-dir\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.685094 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/409e44ed-8f6d-4321-9620-d8da23cf0fec-mountpoint-dir\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.685129 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/303bdbe6-3bb4-4ace-86b1-f489c795580f-config-volume\") pod \"collect-profiles-29483205-527gk\" (UID: \"303bdbe6-3bb4-4ace-86b1-f489c795580f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.685326 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/409e44ed-8f6d-4321-9620-d8da23cf0fec-csi-data-dir\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: E0121 10:58:32.686955 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 10:58:33.186939266 +0000 UTC m=+100.446895915 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.687165 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f-config\") pod \"service-ca-operator-777779d784-f877x\" (UID: \"d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f877x" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.688136 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7e58845-f0a1-4320-b879-0765b6d57988-config-volume\") pod \"dns-default-znm6j\" (UID: \"b7e58845-f0a1-4320-b879-0765b6d57988\") " pod="openshift-dns/dns-default-znm6j" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.707216 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f-serving-cert\") pod \"service-ca-operator-777779d784-f877x\" (UID: \"d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f877x" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.707668 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/dc0d7d08-d133-4880-a391-e8750932d507-node-bootstrap-token\") pod \"machine-config-server-468h5\" (UID: \"dc0d7d08-d133-4880-a391-e8750932d507\") " pod="openshift-machine-config-operator/machine-config-server-468h5" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.709859 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/dc0d7d08-d133-4880-a391-e8750932d507-certs\") pod \"machine-config-server-468h5\" (UID: \"dc0d7d08-d133-4880-a391-e8750932d507\") " pod="openshift-machine-config-operator/machine-config-server-468h5" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.709961 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b7e58845-f0a1-4320-b879-0765b6d57988-metrics-tls\") pod \"dns-default-znm6j\" (UID: \"b7e58845-f0a1-4320-b879-0765b6d57988\") " pod="openshift-dns/dns-default-znm6j" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.711809 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bc38f0b5-944c-40ae-aed0-50ca39ea2627-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hfc8p\" (UID: \"bc38f0b5-944c-40ae-aed0-50ca39ea2627\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hfc8p" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.713674 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/86acb693-c0d9-41f4-b33c-4716963ce268-cert\") pod \"ingress-canary-kl9j4\" (UID: \"86acb693-c0d9-41f4-b33c-4716963ce268\") " pod="openshift-ingress-canary/ingress-canary-kl9j4" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.714673 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/303bdbe6-3bb4-4ace-86b1-f489c795580f-secret-volume\") pod \"collect-profiles-29483205-527gk\" (UID: \"303bdbe6-3bb4-4ace-86b1-f489c795580f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.734346 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gc7m\" (UniqueName: \"kubernetes.io/projected/86acb693-c0d9-41f4-b33c-4716963ce268-kube-api-access-6gc7m\") pod \"ingress-canary-kl9j4\" (UID: \"86acb693-c0d9-41f4-b33c-4716963ce268\") " pod="openshift-ingress-canary/ingress-canary-kl9j4" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.779941 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.785078 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:32 crc kubenswrapper[4881]: E0121 10:58:32.785293 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:33.28526018 +0000 UTC m=+100.545216659 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.785539 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: E0121 10:58:32.786226 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:33.286209674 +0000 UTC m=+100.546166143 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.806398 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc5l6\" (UniqueName: \"kubernetes.io/projected/b7e58845-f0a1-4320-b879-0765b6d57988-kube-api-access-vc5l6\") pod \"dns-default-znm6j\" (UID: \"b7e58845-f0a1-4320-b879-0765b6d57988\") " pod="openshift-dns/dns-default-znm6j" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.817130 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqvwd\" (UniqueName: \"kubernetes.io/projected/dc0d7d08-d133-4880-a391-e8750932d507-kube-api-access-sqvwd\") pod \"machine-config-server-468h5\" (UID: \"dc0d7d08-d133-4880-a391-e8750932d507\") " pod="openshift-machine-config-operator/machine-config-server-468h5" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.820579 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l45nv\" (UniqueName: \"kubernetes.io/projected/303bdbe6-3bb4-4ace-86b1-f489c795580f-kube-api-access-l45nv\") pod \"collect-profiles-29483205-527gk\" (UID: \"303bdbe6-3bb4-4ace-86b1-f489c795580f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.828956 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkcp2\" (UniqueName: \"kubernetes.io/projected/bc38f0b5-944c-40ae-aed0-50ca39ea2627-kube-api-access-pkcp2\") pod \"control-plane-machine-set-operator-78cbb6b69f-hfc8p\" (UID: \"bc38f0b5-944c-40ae-aed0-50ca39ea2627\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hfc8p" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.833000 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7qxx\" (UniqueName: \"kubernetes.io/projected/409e44ed-8f6d-4321-9620-d8da23cf0fec-kube-api-access-b7qxx\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.845654 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.849382 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqf99\" (UniqueName: \"kubernetes.io/projected/d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f-kube-api-access-pqf99\") pod \"service-ca-operator-777779d784-f877x\" (UID: \"d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f877x" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.863110 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kl9j4" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.863642 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hfc8p" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.875638 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-468h5" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.887057 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:32 crc kubenswrapper[4881]: E0121 10:58:32.887393 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:33.387377639 +0000 UTC m=+100.647334108 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.908284 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.917556 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-znm6j" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.988116 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: E0121 10:58:32.988669 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:33.488654716 +0000 UTC m=+100.748611185 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.090422 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:33 crc kubenswrapper[4881]: E0121 10:58:33.092057 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:33.592025365 +0000 UTC m=+100.851981834 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.093830 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:33 crc kubenswrapper[4881]: E0121 10:58:33.097141 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:33.59711702 +0000 UTC m=+100.857073499 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.136584 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f877x" Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.196453 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:33 crc kubenswrapper[4881]: E0121 10:58:33.196918 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:33.696901881 +0000 UTC m=+100.956858340 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.297736 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:33 crc kubenswrapper[4881]: E0121 10:58:33.298098 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:33.798084956 +0000 UTC m=+101.058041425 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.369228 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz" event={"ID":"1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c","Type":"ContainerStarted","Data":"b612ece999ac387cc8c5c1776465ef7f8d185dabd4a70b1869b7f4b1da0a539e"} Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.370588 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-468h5" event={"ID":"dc0d7d08-d133-4880-a391-e8750932d507","Type":"ContainerStarted","Data":"969abce82a4756be549f93f591a3a1570c4abc95cb16b5c762a08c96568626b5"} Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.370749 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-468h5" event={"ID":"dc0d7d08-d133-4880-a391-e8750932d507","Type":"ContainerStarted","Data":"c21ffc83923832013770136f03a6bcebbad73c3ba8141faa9374f398252e99d1"} Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.372043 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-v7wnh" event={"ID":"52d94566-7844-4414-bf48-9122c2207dd6","Type":"ContainerStarted","Data":"7b763d882cbb654ffa22e465972973b093a08e49c3b49a08597217f1665401de"} Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.372090 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-v7wnh" event={"ID":"52d94566-7844-4414-bf48-9122c2207dd6","Type":"ContainerStarted","Data":"c675755f41e28c775bdb8abb860df6e5c252ec3742596b9c9d30f78cad4f1d8e"} Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.398314 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:33 crc kubenswrapper[4881]: E0121 10:58:33.399392 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:33.899362424 +0000 UTC m=+101.159318923 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.571432 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:33 crc kubenswrapper[4881]: E0121 10:58:33.571935 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:34.071918322 +0000 UTC m=+101.331874791 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.681429 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:33 crc kubenswrapper[4881]: E0121 10:58:33.681912 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:34.181888872 +0000 UTC m=+101.441845351 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.786822 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:33 crc kubenswrapper[4881]: E0121 10:58:33.787313 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:34.287291312 +0000 UTC m=+101.547247801 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.902881 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:33 crc kubenswrapper[4881]: E0121 10:58:33.903011 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:34.402971702 +0000 UTC m=+101.662928171 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.904924 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:33 crc kubenswrapper[4881]: E0121 10:58:33.905527 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:34.405510025 +0000 UTC m=+101.665466494 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.006023 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:34 crc kubenswrapper[4881]: E0121 10:58:34.009569 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:34.50954433 +0000 UTC m=+101.769500799 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.118068 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:34 crc kubenswrapper[4881]: E0121 10:58:34.118721 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:34.618701831 +0000 UTC m=+101.878658290 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.119271 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.219778 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:34 crc kubenswrapper[4881]: E0121 10:58:34.220180 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:34.720163303 +0000 UTC m=+101.980119772 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.335258 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-468h5" podStartSLOduration=5.335238189 podStartE2EDuration="5.335238189s" podCreationTimestamp="2026-01-21 10:58:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:34.334994604 +0000 UTC m=+101.594951073" watchObservedRunningTime="2026-01-21 10:58:34.335238189 +0000 UTC m=+101.595194658" Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.337401 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-v7wnh" podStartSLOduration=79.337392393 podStartE2EDuration="1m19.337392393s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:34.304500244 +0000 UTC m=+101.564456713" watchObservedRunningTime="2026-01-21 10:58:34.337392393 +0000 UTC m=+101.597348862" Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.425840 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:34 crc kubenswrapper[4881]: E0121 10:58:34.426239 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:34.926227474 +0000 UTC m=+102.186183943 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.527641 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:34 crc kubenswrapper[4881]: E0121 10:58:34.528057 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:35.028026505 +0000 UTC m=+102.287982974 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.629352 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:34 crc kubenswrapper[4881]: E0121 10:58:34.629925 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:35.129907087 +0000 UTC m=+102.389863556 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.688262 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.688337 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.730502 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:34 crc kubenswrapper[4881]: E0121 10:58:34.730652 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:35.23062426 +0000 UTC m=+102.490580739 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.730718 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:34 crc kubenswrapper[4881]: E0121 10:58:34.731098 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:35.231087992 +0000 UTC m=+102.491044461 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.831923 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:34 crc kubenswrapper[4881]: E0121 10:58:34.832137 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:35.332110783 +0000 UTC m=+102.592067252 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.832208 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:34 crc kubenswrapper[4881]: E0121 10:58:34.832540 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:35.332525873 +0000 UTC m=+102.592482332 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.932999 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:34 crc kubenswrapper[4881]: E0121 10:58:34.933209 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:35.433182075 +0000 UTC m=+102.693138544 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.933439 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:34 crc kubenswrapper[4881]: E0121 10:58:34.933809 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:35.433796441 +0000 UTC m=+102.693752910 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.034441 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:35 crc kubenswrapper[4881]: E0121 10:58:35.034604 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:35.534579846 +0000 UTC m=+102.794536315 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.034719 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:35 crc kubenswrapper[4881]: E0121 10:58:35.035347 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:35.535325884 +0000 UTC m=+102.795282393 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.114623 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:58:35 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld Jan 21 10:58:35 crc kubenswrapper[4881]: [+]process-running ok Jan 21 10:58:35 crc kubenswrapper[4881]: healthz check failed Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.115107 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.135548 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:35 crc kubenswrapper[4881]: E0121 10:58:35.135764 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:35.635734151 +0000 UTC m=+102.895690660 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.136437 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:35 crc kubenswrapper[4881]: E0121 10:58:35.136956 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:35.6369342 +0000 UTC m=+102.896890709 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.240540 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:35 crc kubenswrapper[4881]: E0121 10:58:35.241520 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:35.741500658 +0000 UTC m=+103.001457127 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.343315 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:35 crc kubenswrapper[4881]: E0121 10:58:35.343719 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:35.843703718 +0000 UTC m=+103.103660197 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.444827 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:35 crc kubenswrapper[4881]: E0121 10:58:35.446266 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:35.946245787 +0000 UTC m=+103.206202256 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.448755 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz" event={"ID":"1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c","Type":"ContainerStarted","Data":"11112e84ed0dda9f2ed7f2f8fa157e44126b69816a49cff9a91f43262ef2598d"} Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.548001 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:35 crc kubenswrapper[4881]: E0121 10:58:35.548448 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.048431177 +0000 UTC m=+103.308387646 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.628006 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz" podStartSLOduration=81.62798424 podStartE2EDuration="1m21.62798424s" podCreationTimestamp="2026-01-21 10:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:35.472275446 +0000 UTC m=+102.732231915" watchObservedRunningTime="2026-01-21 10:58:35.62798424 +0000 UTC m=+102.887940709" Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.628372 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-qxzd9"] Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.633224 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wjlxh"] Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.635592 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-n2h44"] Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.646446 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w"] Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.648964 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-svmbc"] Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.649029 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:35 crc kubenswrapper[4881]: E0121 10:58:35.650822 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.150791381 +0000 UTC m=+103.410747940 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.752805 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:35 crc kubenswrapper[4881]: E0121 10:58:35.753164 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.253151235 +0000 UTC m=+103.513107694 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.853730 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:35 crc kubenswrapper[4881]: E0121 10:58:35.853898 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.353880069 +0000 UTC m=+103.613836538 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.854007 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:35 crc kubenswrapper[4881]: E0121 10:58:35.854255 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.354247728 +0000 UTC m=+103.614204197 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.957431 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:35 crc kubenswrapper[4881]: E0121 10:58:35.957619 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.457596476 +0000 UTC m=+103.717552945 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.957896 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:35 crc kubenswrapper[4881]: E0121 10:58:35.958270 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.458257432 +0000 UTC m=+103.718213901 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.059024 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:36 crc kubenswrapper[4881]: E0121 10:58:36.059260 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.559228252 +0000 UTC m=+103.819184731 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.059598 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:36 crc kubenswrapper[4881]: E0121 10:58:36.060192 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.560164525 +0000 UTC m=+103.820121034 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.067988 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.070868 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-rslv2"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.087187 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.103068 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.104939 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.115440 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:58:36 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld Jan 21 10:58:36 crc kubenswrapper[4881]: [+]process-running ok Jan 21 10:58:36 crc kubenswrapper[4881]: healthz check failed Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.115491 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 
10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.119384 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qpdx4"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.142268 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jvxv4"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.144707 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zjqz6"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.153863 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-h97cd"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.153922 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-whh46"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.154175 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.160612 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.160884 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs\") pod \"network-metrics-daemon-dtv4t\" (UID: \"3552adbd-011f-4552-9e69-233b92c554c8\") " pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:36 crc kubenswrapper[4881]: E0121 10:58:36.161041 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.661022153 +0000 UTC m=+103.920978622 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.163754 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.163837 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.168551 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs\") pod \"network-metrics-daemon-dtv4t\" (UID: \"3552adbd-011f-4552-9e69-233b92c554c8\") " pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:36 crc kubenswrapper[4881]: W0121 10:58:36.171527 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e960def_7bc7_4041_94dc_8ccea63f8bb8.slice/crio-0790a402c93806fd2f05db80cba862f512e12e5dd1ae94ff92722face7b15059 WatchSource:0}: Error finding container 0790a402c93806fd2f05db80cba862f512e12e5dd1ae94ff92722face7b15059: Status 404 returned error can't find the container with id 0790a402c93806fd2f05db80cba862f512e12e5dd1ae94ff92722face7b15059 Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.176727 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.183268 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-wrqpb"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.185279 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-cclnc"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.187137 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.197930 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-f877x"] Jan 21 10:58:36 crc kubenswrapper[4881]: W0121 10:58:36.206404 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86ac2c23_01e6_4a22_a79d_77a3269fb5a0.slice/crio-79fceb069012ae79a981dcdc297ad76c1e3189b6f4784ea3791d374fc4482001 WatchSource:0}: Error finding container 79fceb069012ae79a981dcdc297ad76c1e3189b6f4784ea3791d374fc4482001: Status 404 returned error can't find the container with id 79fceb069012ae79a981dcdc297ad76c1e3189b6f4784ea3791d374fc4482001 Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.218354 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-znm6j"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.231035 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.234199 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.240304 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.243249 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-j4s5w"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.244985 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.261916 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:36 crc kubenswrapper[4881]: E0121 10:58:36.262193 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.762179717 +0000 UTC m=+104.022136186 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.359483 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.363514 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:36 crc kubenswrapper[4881]: E0121 10:58:36.363693 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.86366538 +0000 UTC m=+104.123621849 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.363871 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:36 crc kubenswrapper[4881]: E0121 10:58:36.364225 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.864215393 +0000 UTC m=+104.124172052 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.460590 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kl9j4"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.464816 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:36 crc kubenswrapper[4881]: E0121 10:58:36.464967 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.964944117 +0000 UTC m=+104.224900586 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.465726 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:36 crc kubenswrapper[4881]: E0121 10:58:36.466232 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.966216668 +0000 UTC m=+104.226173317 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.470102 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n" event={"ID":"c510b795-d750-4f94-bc9a-88ba625bd556","Type":"ContainerStarted","Data":"8edb718f49287ee1e5992d45a6b5d6efe3fc50ba77f6eae4e83b19f6c3c44a42"}
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.471439 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw" event={"ID":"5d68a50c-6a38-4aba-bb02-9a25712d2212","Type":"ContainerStarted","Data":"75ffb299185e9e6d371ecbdb7eb473f4b6ff637b2eba3dc8b863fdf20d1ae25c"}
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.476076 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-wrqpb" event={"ID":"628cb8f4-a587-498f-9398-403e0af5eec4","Type":"ContainerStarted","Data":"763ae62f18116f1fe4593545b01b2553ad3792b2e87cdae45827fd67eae883d2"}
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.478585 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc" event={"ID":"0007a585-5b17-44bd-89b8-2d1d233a03d4","Type":"ContainerStarted","Data":"4f564ff03cbbaa0f8042cde333f5b5b3cea9b7169727da8459685bf907581ef3"}
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.479364 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb"]
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.482325 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-whh46" event={"ID":"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad","Type":"ContainerStarted","Data":"216606908c8b27d34a9f3f57e132945839e5bd3eae4f856f2671c9e8308d7423"}
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.499847 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hfc8p"]
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.521534 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-42f9f"]
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.539406 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xmq82"]
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.557276 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-llgd7"]
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.559566 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl" event={"ID":"5c8e7010-8b57-47ed-9270-417650a2a7c5","Type":"ContainerStarted","Data":"ea296fe97b057f6f0df6ff84011de8b3bc8a0c8c0c89e26121d88777e3751daa"}
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.566521 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:36 crc kubenswrapper[4881]: E0121 10:58:36.567277 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:37.06724363 +0000 UTC m=+104.327200099 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.588916 4881 csr.go:261] certificate signing request csr-d74tp is approved, waiting to be issued
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.591578 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-svmbc" event={"ID":"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57","Type":"ContainerStarted","Data":"c802ddbd8e9b079a0a6e4ee9d0dd87824bf3cb502a0912f44e02f0cca256b8e4"}
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.593386 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6"]
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.595444 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" event={"ID":"537a87a4-8f58-441f-9199-62c5849c693c","Type":"ContainerStarted","Data":"80b7b3ce063567cc1fbf487ef2d0e5ee3c9f8664a2046c9a8fae503691d6d224"}
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.596941 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" event={"ID":"706c6a3b-823b-4ea3-b7a8-e20d571d3ace","Type":"ContainerStarted","Data":"22d022e22752b1a845c64ff7297933c2f9f91e223d3640540e2ab737fe1ace78"}
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.598289 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w" event={"ID":"0ceebcd8-2c53-4e4d-97bb-5d81008a6442","Type":"ContainerStarted","Data":"447a68b7525d82522d86c9766479b34dac564e482edb660c3decf67342a91ca6"}
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.600693 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9" event={"ID":"29dca8bf-7bce-455b-812f-fca8861518ca","Type":"ContainerStarted","Data":"665263747c5f8cab9e1f53a92fa637a838a92a1c9eff3ee375c09f2912a7f3ff"}
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.602630 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59" event={"ID":"1e960def-7bc7-4041-94dc-8ccea63f8bb8","Type":"ContainerStarted","Data":"0790a402c93806fd2f05db80cba862f512e12e5dd1ae94ff92722face7b15059"}
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.604210 4881 csr.go:257] certificate signing request csr-d74tp is issued
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.614605 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" event={"ID":"146cbde4-d891-47d8-a09f-d4f4b50bfe6d","Type":"ContainerStarted","Data":"d68cad796c69f936cad4980c773067b142f355f7552d6b0961feb10ece906af6"}
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.618068 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qxzd9" event={"ID":"bb8fc8b3-9818-40e2-afb2-860e2d1efae1","Type":"ContainerStarted","Data":"d060bd9f87ed03936c0be9ee17418f9087722140490e6ad49375f3c789b2e023"}
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.619638 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6"]
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.623679 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-j4s5w" event={"ID":"6742e18f-a187-4a77-a734-bdec89bd89e0","Type":"ContainerStarted","Data":"d24e8a3dde9ad4c180d564caa8a04bc0ccde594c7182df9414c0736c020bf2cf"}
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.634820 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk" event={"ID":"303bdbe6-3bb4-4ace-86b1-f489c795580f","Type":"ContainerStarted","Data":"b3d019b82236dd15b24f4a31ba5ebc67107e80ee3f592acc46c51b2bbe16aba5"}
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.635951 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk" event={"ID":"f997bb38-4f6e-495f-acb8-e8e0d1730947","Type":"ContainerStarted","Data":"befe8bd3ce126f78f32908d2279e0d5e1763ebfdf011f99e818f13ef4ab1771f"}
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.637654 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" event={"ID":"96e1443d-dd18-4343-b200-756f9511c163","Type":"ContainerStarted","Data":"109869b853f39c175423d29e72a66cb9bb0801e9b6b3b8a0e533ada32404b37e"}
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.639638 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2" event={"ID":"27c4b3cb-57d3-4282-93fe-16cfab039277","Type":"ContainerStarted","Data":"546be78d7fa80cb5217f9ec956561952bcb0ad7e720be5961027598bb51fa46c"}
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.641639 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpdx4" event={"ID":"86ac2c23-01e6-4a22-a79d-77a3269fb5a0","Type":"ContainerStarted","Data":"79fceb069012ae79a981dcdc297ad76c1e3189b6f4784ea3791d374fc4482001"}
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.642311 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq"]
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.644439 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7" event={"ID":"e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb","Type":"ContainerStarted","Data":"45f19fd34c35f1237d72f2fec0fc6c65d58ffab5dace1b67d0280f650700ba1e"}
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.668066 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:36 crc kubenswrapper[4881]: E0121 10:58:36.669175 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:37.169155483 +0000 UTC m=+104.429111952 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.674370 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-znm6j" event={"ID":"b7e58845-f0a1-4320-b879-0765b6d57988","Type":"ContainerStarted","Data":"64cb4c239b87efb7cd9b98d2d413218f385bd070aff7cdefce602a2185c738ce"}
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.676614 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-zjqz6" event={"ID":"f1f74368-89f6-44fb-aaa2-9159a217b4d7","Type":"ContainerStarted","Data":"0ab556548ff44637ee5a7cefce9e8d6aecb22153bf70df9fc5dadbbc343f7eec"}
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.678267 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-zjqz6"
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.681879 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-n2h44" event={"ID":"863eda44-9a47-42de-b2de-49234ac647f0","Type":"ContainerStarted","Data":"b72f810c040ef84ae1cad3cba480a5966669a8f0f0c8fbf4634e0daffff50f1e"}
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.683014 4881 patch_prober.go:28] interesting pod/console-operator-58897d9998-zjqz6 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.683052 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-zjqz6" podUID="f1f74368-89f6-44fb-aaa2-9159a217b4d7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.683932 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f877x" event={"ID":"d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f","Type":"ContainerStarted","Data":"82c7d32520d2436d4f7a9663e687243c491257f7dc62b2e72e1981db2f9c8144"}
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.687459 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc" event={"ID":"8465162e-dd9f-45b4-83a6-94666ac2b87b","Type":"ContainerStarted","Data":"a33b3cb1960cd9728cac6829f5670abf510f7506478b90d9f1a890f442173bb0"}
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.693521 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" event={"ID":"3201b51c-af63-40e7-8037-9e581791d495","Type":"ContainerStarted","Data":"7b5feb131a5e4a06103b5280c54ad0837a19c465a0aa933409bc7c15f7f0734f"}
Jan 21 10:58:36 crc kubenswrapper[4881]: W0121 10:58:36.746840 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86acb693_c0d9_41f4_b33c_4716963ce268.slice/crio-758110cf0b46064de00bb150d4a98573f91f8fdf43e0f8ade86d25a387cec9db WatchSource:0}: Error finding container 758110cf0b46064de00bb150d4a98573f91f8fdf43e0f8ade86d25a387cec9db: Status 404 returned error can't find the container with id 758110cf0b46064de00bb150d4a98573f91f8fdf43e0f8ade86d25a387cec9db
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.754607 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" event={"ID":"002a39eb-e2e0-4d3e-8f61-89a539a653a9","Type":"ContainerStarted","Data":"fec206b72c4648e66af3adcacd7cb5106e2766bcb34d529fae1cd757bd777535"}
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.754811 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh"
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.758047 4881 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-wjlxh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.758099 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" podUID="002a39eb-e2e0-4d3e-8f61-89a539a653a9" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused"
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.769566 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:36 crc kubenswrapper[4881]: E0121 10:58:36.772143 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:37.271918146 +0000 UTC m=+104.531874635 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:36 crc kubenswrapper[4881]: W0121 10:58:36.775026 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc38f0b5_944c_40ae_aed0_50ca39ea2627.slice/crio-a91a58002d4d6f4f72bda9c7484e2bb65cd6b6f5f5601a84f2427afb828fb570 WatchSource:0}: Error finding container a91a58002d4d6f4f72bda9c7484e2bb65cd6b6f5f5601a84f2427afb828fb570: Status 404 returned error can't find the container with id a91a58002d4d6f4f72bda9c7484e2bb65cd6b6f5f5601a84f2427afb828fb570
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.783276 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-zjqz6" podStartSLOduration=81.783255605 podStartE2EDuration="1m21.783255605s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:36.694436424 +0000 UTC m=+103.954392903" watchObservedRunningTime="2026-01-21 10:58:36.783255605 +0000 UTC m=+104.043212064"
Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.904088 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:36 crc kubenswrapper[4881]: E0121 10:58:36.904532 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:37.404517774 +0000 UTC m=+104.664474243 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.004670 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:37 crc kubenswrapper[4881]: E0121 10:58:37.005470 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:37.505451723 +0000 UTC m=+104.765408192 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.109175 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:37 crc kubenswrapper[4881]: E0121 10:58:37.110250 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:37.610237086 +0000 UTC m=+104.870193555 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.118605 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 10:58:37 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld
Jan 21 10:58:37 crc kubenswrapper[4881]: [+]process-running ok
Jan 21 10:58:37 crc kubenswrapper[4881]: healthz check failed
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.118645 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.210568 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:37 crc kubenswrapper[4881]: E0121 10:58:37.210731 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:37.710702854 +0000 UTC m=+104.970659313 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
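Annotation: the E-level records above are a single retry loop. Every MountVolume.MountDevice for pod image-registry-697d97f7c8-n98tz and every UnmountVolume.TearDown for the departed pod UID 8f668bae-612b-4b75-9490-919e737c6a3b fails with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers", and nestedpendingoperations requeues each attempt with a 500ms durationBeforeRetry. The csi-hostpathplugin-42f9f pod is itself only now starting (see its SyncLoop UPDATE and later ContainerStarted records), so the pattern is consistent with a CSI driver that has not yet completed node registration rather than a storage fault. A minimal sketch for tallying these retries per volume, assuming this journal excerpt has been saved to a file named kubelet.log (hypothetical name):

import re
from collections import Counter

LOG = "kubelet.log"  # hypothetical filename for this journal excerpt

# Each kubelet retry record ends with an 'Error: <op> failed for volume "<pvc>"' clause.
pat = re.compile(r'Error: (?P<op>(?:Mount|Unmount)Volume\.\w+) failed for volume "(?P<vol>[^"]+)"')

counts = Counter()
with open(LOG) as f:
    for line in f:
        for m in pat.finditer(line):  # finditer tolerates run-together lines
            counts[(m.group("op"), m.group("vol"))] += 1

for (op, vol), n in counts.most_common():
    print(f"{n:4d}  {op:26s}  {vol}")

Against this excerpt every hit is the same PVC (pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8), alternating between the teardown for the old pod UID and the mount for the new image-registry pod.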
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.211691 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:37 crc kubenswrapper[4881]: E0121 10:58:37.212398 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:37.712383105 +0000 UTC m=+104.972339574 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.349990 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:37 crc kubenswrapper[4881]: E0121 10:58:37.350766 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:37.850739394 +0000 UTC m=+105.110695863 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.351364 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:37 crc kubenswrapper[4881]: E0121 10:58:37.351852 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:37.851838171 +0000 UTC m=+105.111794640 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.459282 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:37 crc kubenswrapper[4881]: E0121 10:58:37.459664 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:37.959647719 +0000 UTC m=+105.219604188 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.562032 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:37 crc kubenswrapper[4881]: E0121 10:58:37.562306 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:38.062293679 +0000 UTC m=+105.322250148 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.611850 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-21 10:53:36 +0000 UTC, rotation deadline is 2026-10-24 06:14:37.839131868 +0000 UTC
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.611884 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6619h16m0.227250451s for next certificate rotation
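Annotation: the two certificate_manager records are routine. The kubelet-serving certificate is valid until 2027-01-21 10:53:36 UTC, the manager has chosen 2026-10-24 06:14:37.839 UTC as the rotation deadline, and the logged wait is simply deadline minus the record's own timestamp. The arithmetic checks out directly (a sketch using only the timestamps printed above; the log's nanosecond precision is truncated to microseconds here):

from datetime import datetime, timezone

now = datetime(2026, 1, 21, 10, 58, 37, 611884, tzinfo=timezone.utc)       # record timestamp
deadline = datetime(2026, 10, 24, 6, 14, 37, 839131, tzinfo=timezone.utc)  # rotation deadline
wait = deadline - now
print(wait)                          # 275 days, 19:16:00.227247
print(wait.total_seconds() / 3600)   # ~6619.27 h, i.e. the logged 6619h16m0.227s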
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.667640 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:37 crc kubenswrapper[4881]: E0121 10:58:37.668300 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:38.168277663 +0000 UTC m=+105.428234132 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.668853 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:37 crc kubenswrapper[4881]: E0121 10:58:37.669190 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:38.169182734 +0000 UTC m=+105.429139203 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.770290 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:37 crc kubenswrapper[4881]: E0121 10:58:37.770919 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:38.270896803 +0000 UTC m=+105.530853272 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.798347 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" event={"ID":"7f30da15-7c75-4c87-9dc4-78653d6f84cd","Type":"ContainerStarted","Data":"a5013592e5c35fc53140f3477485624f58e610e910f930df124104e361b84262"}
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.805654 4881 generic.go:334] "Generic (PLEG): container finished" podID="537a87a4-8f58-441f-9199-62c5849c693c" containerID="0a0a4a7159c4ae5e1ca01e1e58266bb2b9687170b75097cbf61c3f3b4f8bda14" exitCode=0
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.805736 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" event={"ID":"537a87a4-8f58-441f-9199-62c5849c693c","Type":"ContainerDied","Data":"0a0a4a7159c4ae5e1ca01e1e58266bb2b9687170b75097cbf61c3f3b4f8bda14"}
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.809568 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2" event={"ID":"27c4b3cb-57d3-4282-93fe-16cfab039277","Type":"ContainerStarted","Data":"b60d23e022ddf3d5f79a677eec9a91d2de918a75469e7207637b9578d8a94ec8"}
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.811362 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk" event={"ID":"303bdbe6-3bb4-4ace-86b1-f489c795580f","Type":"ContainerStarted","Data":"2f6a1a1e4268540ee682b58127eb41126b116ba4e30186b584ee325d0961ebec"}
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.812959 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" event={"ID":"706c6a3b-823b-4ea3-b7a8-e20d571d3ace","Type":"ContainerStarted","Data":"9c8c8d93509d2a29c183d63351f0748ec6e60414dbb285df980924884b598111"}
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.813662 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8"
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.825400 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" event={"ID":"002a39eb-e2e0-4d3e-8f61-89a539a653a9","Type":"ContainerStarted","Data":"6b8fc2aac0518f9de92cee69b4b59a05f08ed2161c480a5655d85171be0e5a8b"}
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.927155 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:37 crc kubenswrapper[4881]: E0121 10:58:37.928316 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:38.428297329 +0000 UTC m=+105.688254008 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.930235 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9" event={"ID":"29dca8bf-7bce-455b-812f-fca8861518ca","Type":"ContainerStarted","Data":"b85b87998c08018dbb35f00249d9602951f94a6261f25f4978adba90ce64b127"}
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.937707 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" podStartSLOduration=82.937673889 podStartE2EDuration="1m22.937673889s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:36.785187073 +0000 UTC m=+104.045143542" watchObservedRunningTime="2026-01-21 10:58:37.937673889 +0000 UTC m=+105.197630358"
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.948206 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" event={"ID":"3201b51c-af63-40e7-8037-9e581791d495","Type":"ContainerStarted","Data":"b653068c58321173ed5dbd8e4e933839f3338650924c22f27cb139db0b90ffe4"}
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.950438 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" event={"ID":"96e1443d-dd18-4343-b200-756f9511c163","Type":"ContainerStarted","Data":"d11972dea06114e95feae0748a3287e910f887f3cb8603d81723a20528613969"}
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.951896 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6" event={"ID":"2957ef21-9f30-4c81-8c6a-4a7f9d7315db","Type":"ContainerStarted","Data":"e8cb541f96d7a4ec14d9a3260ed76cf4f3c8fd2e5d5a593d5d8bec92ed22c9a9"}
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.952958 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl" event={"ID":"5c8e7010-8b57-47ed-9270-417650a2a7c5","Type":"ContainerStarted","Data":"28f8e69023156cf8f9966f5f6a94a44ae9e681e64b2e5ebc4c585613bcd6eea4"}
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.954022 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-42f9f" event={"ID":"409e44ed-8f6d-4321-9620-d8da23cf0fec","Type":"ContainerStarted","Data":"0a1c334b89e7e575b7c043f32c04a7431a6ac04ac6256966d01bbc7cc00aad26"}
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.955188 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" event={"ID":"c56c4a24-e6c6-4aa0-8a62-94d3179dfe54","Type":"ContainerStarted","Data":"6a35c30526df04d0205ad662fc3bb9f352a26dfd4273236fe9c24b4ffbe74b53"}
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.962509 4881 generic.go:334] "Generic (PLEG): container finished" podID="3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57" containerID="6f04d5d4e813545e106e07923bb6b0e2a0341cba5339d2bf5c5d9a0d6610f808" exitCode=0
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.962561 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-svmbc" event={"ID":"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57","Type":"ContainerDied","Data":"6f04d5d4e813545e106e07923bb6b0e2a0341cba5339d2bf5c5d9a0d6610f808"}
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.978331 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w" event={"ID":"0ceebcd8-2c53-4e4d-97bb-5d81008a6442","Type":"ContainerStarted","Data":"efc58c3509ff202fa895654e7d0ac50244b04c0ab623a0b41ed93222292364c4"}
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.979516 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-wrqpb" event={"ID":"628cb8f4-a587-498f-9398-403e0af5eec4","Type":"ContainerStarted","Data":"8ac6e934bf2c65c273e37127eb78e3c49f6ab743027f68c7c31810cbe67f929a"}
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.980377 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-wrqpb"
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.981596 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.981640 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.983600 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-zjqz6" event={"ID":"f1f74368-89f6-44fb-aaa2-9159a217b4d7","Type":"ContainerStarted","Data":"ab69032099ffb0c7c07dfa25b0cc882b8ffc1cd68bb960103922cd624933ac71"}
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.985699 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" event={"ID":"e94f1e92-21b2-44c9-b499-b879850c288d","Type":"ContainerStarted","Data":"123c57f996d77041997b15262c61902d2eed5d15c9314dac5b070f52214a0ad3"}
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.986663 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb" event={"ID":"7470431a-2a31-41ae-b021-510ae5e3c505","Type":"ContainerStarted","Data":"be3a5105ee1882e177d05b3246339ad51f6d68a1328dbb49c8d87d096b42f33b"}
Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.988599 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk" event={"ID":"f997bb38-4f6e-495f-acb8-e8e0d1730947","Type":"ContainerStarted","Data":"3feb878b277a04b1568c451615458cb131092ba8b5c93591ab07f1fc8b5f5092"}
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk" event={"ID":"f997bb38-4f6e-495f-acb8-e8e0d1730947","Type":"ContainerStarted","Data":"3feb878b277a04b1568c451615458cb131092ba8b5c93591ab07f1fc8b5f5092"} Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.991312 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw" event={"ID":"5d68a50c-6a38-4aba-bb02-9a25712d2212","Type":"ContainerStarted","Data":"5dbcbaf778250d0b18bb2f19dab01c57165692f21a151850181f8e36142ee2e4"} Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.994060 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-whh46" event={"ID":"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad","Type":"ContainerStarted","Data":"35ce5ecabc873c14d35cf37aa4dd5c20723f513985dbc4caa43cffafe43e41fe"} Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.995091 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.007394 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kl9j4" event={"ID":"86acb693-c0d9-41f4-b33c-4716963ce268","Type":"ContainerStarted","Data":"758110cf0b46064de00bb150d4a98573f91f8fdf43e0f8ade86d25a387cec9db"} Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.009604 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpdx4" event={"ID":"86ac2c23-01e6-4a22-a79d-77a3269fb5a0","Type":"ContainerStarted","Data":"c422ab28c4be18f85caf1bf6a22eb9b1707b5f01f7e10b067205620fb2baacb7"} Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.011657 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7" event={"ID":"e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb","Type":"ContainerStarted","Data":"341235e6ea2901d1c63a118152a9dc368ad288a306e3bbde5a5f5fe867756e78"} Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.032069 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68" event={"ID":"b745a377-4575-45fb-a206-ea4754ecff76","Type":"ContainerStarted","Data":"0b0d5a92ec2b9f828ddc94573c379732dac2871a26ee02d0fa250bd34f099f95"} Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.036889 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f877x" event={"ID":"d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f","Type":"ContainerStarted","Data":"d01d1b820366e91afc7ff04d0a7a94c20c13e1911f2c5ca9eee7fe90727f6d77"} Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.038773 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:38 crc kubenswrapper[4881]: E0121 10:58:38.039066 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 10:58:38.539047279 +0000 UTC m=+105.799003738 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.039598 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-j4s5w" event={"ID":"6742e18f-a187-4a77-a734-bdec89bd89e0","Type":"ContainerStarted","Data":"14286972e56053dbcb9d0135891d1dba55e7082cb155d960d866368fa5f331be"} Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.041038 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qxzd9" event={"ID":"bb8fc8b3-9818-40e2-afb2-860e2d1efae1","Type":"ContainerStarted","Data":"8f2ac82a3ce8ce5983172b3cbd1e9a6aa27d2f48fa81d54ee2ef2ad283fa8d47"} Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.042544 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hfc8p" event={"ID":"bc38f0b5-944c-40ae-aed0-50ca39ea2627","Type":"ContainerStarted","Data":"a91a58002d4d6f4f72bda9c7484e2bb65cd6b6f5f5601a84f2427afb828fb570"} Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.043753 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-n2h44" event={"ID":"863eda44-9a47-42de-b2de-49234ac647f0","Type":"ContainerStarted","Data":"35bc198bea517d29ac125c74b3ad165d16a4bb617772670696d3229c208e4dec"} Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.044118 4881 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-whh46 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.33:6443/healthz\": dial tcp 10.217.0.33:6443: connect: connection refused" start-of-body= Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.044168 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-whh46" podUID="2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.33:6443/healthz\": dial tcp 10.217.0.33:6443: connect: connection refused" Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.053472 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-llgd7" event={"ID":"5f2944a8-8d91-4461-aa64-8908ca17f59e","Type":"ContainerStarted","Data":"1dce8a5c711904a4576ee0efa99a5227bad6330604f6039e5268181ffa724e4f"} Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.181336 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2" podStartSLOduration=83.181307663 podStartE2EDuration="1m23.181307663s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:38.179881498 +0000 UTC m=+105.439837977" watchObservedRunningTime="2026-01-21 
10:58:38.181307663 +0000 UTC m=+105.441264132" Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.183063 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.186695 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:38 crc kubenswrapper[4881]: E0121 10:58:38.201955 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:38.70193728 +0000 UTC m=+105.961893939 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.209110 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc" event={"ID":"8465162e-dd9f-45b4-83a6-94666ac2b87b","Type":"ContainerStarted","Data":"fefa0e429b0c82f9f54c61490c4c91d30aeebb41b0d2233b56bd40f1ebf61528"} Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.232150 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:58:38 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld Jan 21 10:58:38 crc kubenswrapper[4881]: [+]process-running ok Jan 21 10:58:38 crc kubenswrapper[4881]: healthz check failed Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.232219 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.249800 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk" podStartSLOduration=84.249759344 podStartE2EDuration="1m24.249759344s" podCreationTimestamp="2026-01-21 10:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:38.23247161 +0000 UTC m=+105.492428079" watchObservedRunningTime="2026-01-21 10:58:38.249759344 +0000 UTC m=+105.509715813" Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.255463 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-zjqz6" Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 
10:58:38.296553 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.297682 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-dtv4t"] Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.298562 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" podStartSLOduration=82.298538793 podStartE2EDuration="1m22.298538793s" podCreationTimestamp="2026-01-21 10:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:38.297494317 +0000 UTC m=+105.557450796" watchObservedRunningTime="2026-01-21 10:58:38.298538793 +0000 UTC m=+105.558495262" Jan 21 10:58:38 crc kubenswrapper[4881]: E0121 10:58:38.300137 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:38.798398109 +0000 UTC m=+106.058354588 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.477010 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:38 crc kubenswrapper[4881]: E0121 10:58:38.477851 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:38.977841006 +0000 UTC m=+106.237797475 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.479311 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9" podStartSLOduration=83.479302002 podStartE2EDuration="1m23.479302002s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:38.47391316 +0000 UTC m=+105.733869629" watchObservedRunningTime="2026-01-21 10:58:38.479302002 +0000 UTC m=+105.739258471" Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.536573 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-wrqpb" podStartSLOduration=83.536556069 podStartE2EDuration="1m23.536556069s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:38.529739341 +0000 UTC m=+105.789695830" watchObservedRunningTime="2026-01-21 10:58:38.536556069 +0000 UTC m=+105.796512528" Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.565065 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f877x" podStartSLOduration=82.565049818 podStartE2EDuration="1m22.565049818s" podCreationTimestamp="2026-01-21 10:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:38.564079164 +0000 UTC m=+105.824035633" watchObservedRunningTime="2026-01-21 10:58:38.565049818 +0000 UTC m=+105.825006287" Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.578302 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:38 crc kubenswrapper[4881]: E0121 10:58:38.578988 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:39.07897283 +0000 UTC m=+106.338929299 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.588927 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" podStartSLOduration=83.588910194 podStartE2EDuration="1m23.588910194s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:38.588375972 +0000 UTC m=+105.848332441" watchObservedRunningTime="2026-01-21 10:58:38.588910194 +0000 UTC m=+105.848866663" Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.658767 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.680823 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:38 crc kubenswrapper[4881]: E0121 10:58:38.681143 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:39.181131189 +0000 UTC m=+106.441087658 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:38 crc kubenswrapper[4881]: W0121 10:58:38.690288 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3552adbd_011f_4552_9e69_233b92c554c8.slice/crio-cb037e397d3c2f6ee7a3ec761c68c5d0ce2c3eb79704e242f2c5186055512710 WatchSource:0}: Error finding container cb037e397d3c2f6ee7a3ec761c68c5d0ce2c3eb79704e242f2c5186055512710: Status 404 returned error can't find the container with id cb037e397d3c2f6ee7a3ec761c68c5d0ce2c3eb79704e242f2c5186055512710 Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.782384 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:38 crc kubenswrapper[4881]: E0121 10:58:38.782712 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:39.282694274 +0000 UTC m=+106.542650743 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.784011 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-whh46" podStartSLOduration=83.783986865 podStartE2EDuration="1m23.783986865s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:38.783148375 +0000 UTC m=+106.043104854" watchObservedRunningTime="2026-01-21 10:58:38.783986865 +0000 UTC m=+106.043943334" Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.060080 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:39 crc kubenswrapper[4881]: E0121 10:58:39.060452 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 10:58:39.560438536 +0000 UTC m=+106.820395005 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.168051 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:39 crc kubenswrapper[4881]: E0121 10:58:39.168609 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:39.668585562 +0000 UTC m=+106.928542021 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.168833 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:39 crc kubenswrapper[4881]: E0121 10:58:39.169432 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:39.669415672 +0000 UTC m=+106.929372141 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.291048 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:58:39 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld Jan 21 10:58:39 crc kubenswrapper[4881]: [+]process-running ok Jan 21 10:58:39 crc kubenswrapper[4881]: healthz check failed Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.291407 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.297002 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:39 crc kubenswrapper[4881]: E0121 10:58:39.297424 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:39.797405086 +0000 UTC m=+107.057361555 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.364029 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw" podStartSLOduration=84.364010482 podStartE2EDuration="1m24.364010482s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:39.363316554 +0000 UTC m=+106.623273023" watchObservedRunningTime="2026-01-21 10:58:39.364010482 +0000 UTC m=+106.623966951" Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.393562 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpdx4" event={"ID":"86ac2c23-01e6-4a22-a79d-77a3269fb5a0","Type":"ContainerStarted","Data":"8b4cad29766b29072bceaa5b7cc7191e97f805f94716f7bca9f31541f11c8cd4"} Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.404159 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-znm6j" event={"ID":"b7e58845-f0a1-4320-b879-0765b6d57988","Type":"ContainerStarted","Data":"185130ada6207293ab7deb8a704c133c5228de3f527054bde6ae9d2ee08f16c1"} Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.405420 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" event={"ID":"3552adbd-011f-4552-9e69-233b92c554c8","Type":"ContainerStarted","Data":"cb037e397d3c2f6ee7a3ec761c68c5d0ce2c3eb79704e242f2c5186055512710"} Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.407412 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59" event={"ID":"1e960def-7bc7-4041-94dc-8ccea63f8bb8","Type":"ContainerStarted","Data":"b44ff908b2f8dfd966c3bd6b0812f139b916876a910951331a7b4443a147daf2"} Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.410197 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n" event={"ID":"c510b795-d750-4f94-bc9a-88ba625bd556","Type":"ContainerStarted","Data":"0c2e39fe292a484df8ff829c890024ee05cda0266b24499047cb35459ba7adc5"} Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.423256 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w" event={"ID":"0ceebcd8-2c53-4e4d-97bb-5d81008a6442","Type":"ContainerStarted","Data":"0703ffdd4428db1abfe60d34b9c956929891c819f94b8271bf04da222464da4b"} Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.429930 4881 generic.go:334] "Generic (PLEG): container finished" podID="146cbde4-d891-47d8-a09f-d4f4b50bfe6d" containerID="baaf7152a0f657da62f5788c917e44ec25680b9897914479e48a0d080a327e47" exitCode=0 Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.430065 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" 
event={"ID":"146cbde4-d891-47d8-a09f-d4f4b50bfe6d","Type":"ContainerDied","Data":"baaf7152a0f657da62f5788c917e44ec25680b9897914479e48a0d080a327e47"} Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.432641 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:39 crc kubenswrapper[4881]: E0121 10:58:39.432987 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:39.932973995 +0000 UTC m=+107.192930464 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.436173 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc" event={"ID":"0007a585-5b17-44bd-89b8-2d1d233a03d4","Type":"ContainerStarted","Data":"1fea7f326694ba0a7adc23fea091401d4b3aa9790d8492a890004e28df288843"} Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.436321 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc" Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.442233 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-qxzd9" podStartSLOduration=84.442223843 podStartE2EDuration="1m24.442223843s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:39.399235487 +0000 UTC m=+106.659191996" watchObservedRunningTime="2026-01-21 10:58:39.442223843 +0000 UTC m=+106.702180312" Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.443000 4881 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-whh46 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.33:6443/healthz\": dial tcp 10.217.0.33:6443: connect: connection refused" start-of-body= Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.443104 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-whh46" podUID="2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.33:6443/healthz\": dial tcp 10.217.0.33:6443: connect: connection refused" Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.443886 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 
10.217.0.27:8080: connect: connection refused" start-of-body= Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.443982 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.476050 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7" podStartSLOduration=84.476028803 podStartE2EDuration="1m24.476028803s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:39.473718886 +0000 UTC m=+106.733675355" watchObservedRunningTime="2026-01-21 10:58:39.476028803 +0000 UTC m=+106.735985272" Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.534344 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:39 crc kubenswrapper[4881]: E0121 10:58:39.537889 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:40.037868592 +0000 UTC m=+107.297825061 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.638484 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:39 crc kubenswrapper[4881]: E0121 10:58:39.691883 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:40.191857074 +0000 UTC m=+107.451813543 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.708304 4881 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zkkpc container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.708427 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc" podUID="0007a585-5b17-44bd-89b8-2d1d233a03d4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.781704 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:39 crc kubenswrapper[4881]: E0121 10:58:39.782482 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:40.282450019 +0000 UTC m=+107.542406488 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.895611 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:39 crc kubenswrapper[4881]: E0121 10:58:39.896193 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:40.396175342 +0000 UTC m=+107.656131811 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.896863 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" podStartSLOduration=84.896840928 podStartE2EDuration="1m24.896840928s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:39.697515873 +0000 UTC m=+106.957472342" watchObservedRunningTime="2026-01-21 10:58:39.896840928 +0000 UTC m=+107.156797397" Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.898593 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk" podStartSLOduration=84.898585431 podStartE2EDuration="1m24.898585431s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:39.519521391 +0000 UTC m=+106.779477870" watchObservedRunningTime="2026-01-21 10:58:39.898585431 +0000 UTC m=+107.158541900" Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.951468 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w" podStartSLOduration=84.951450449 podStartE2EDuration="1m24.951450449s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:39.949286747 +0000 UTC m=+107.209243216" watchObservedRunningTime="2026-01-21 10:58:39.951450449 +0000 UTC m=+107.211406918" Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.002036 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:40 crc kubenswrapper[4881]: E0121 10:58:40.002391 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:40.502375281 +0000 UTC m=+107.762331760 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.021996 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc" podStartSLOduration=84.021966362 podStartE2EDuration="1m24.021966362s" podCreationTimestamp="2026-01-21 10:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:39.979480258 +0000 UTC m=+107.239436717" watchObservedRunningTime="2026-01-21 10:58:40.021966362 +0000 UTC m=+107.281922831" Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.022610 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpdx4" podStartSLOduration=85.022602807 podStartE2EDuration="1m25.022602807s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:40.004702027 +0000 UTC m=+107.264658526" watchObservedRunningTime="2026-01-21 10:58:40.022602807 +0000 UTC m=+107.282559276" Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.112578 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59" podStartSLOduration=85.112546716 podStartE2EDuration="1m25.112546716s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:40.104142359 +0000 UTC m=+107.364098828" watchObservedRunningTime="2026-01-21 10:58:40.112546716 +0000 UTC m=+107.372503185" Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.116033 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:40 crc kubenswrapper[4881]: E0121 10:58:40.116532 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:40.616514124 +0000 UTC m=+107.876470593 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.122801 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:58:40 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld Jan 21 10:58:40 crc kubenswrapper[4881]: [+]process-running ok Jan 21 10:58:40 crc kubenswrapper[4881]: healthz check failed Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.123024 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.159313 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n" podStartSLOduration=85.159294455 podStartE2EDuration="1m25.159294455s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:40.141000406 +0000 UTC m=+107.400956875" watchObservedRunningTime="2026-01-21 10:58:40.159294455 +0000 UTC m=+107.419250924" Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.216687 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:40 crc kubenswrapper[4881]: E0121 10:58:40.217571 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:40.717525495 +0000 UTC m=+107.977481964 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.349975 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:40 crc kubenswrapper[4881]: E0121 10:58:40.350604 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:40.850586303 +0000 UTC m=+108.110542772 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.452208 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:40 crc kubenswrapper[4881]: E0121 10:58:40.452531 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:40.952500876 +0000 UTC m=+108.212457345 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.452739 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:40 crc kubenswrapper[4881]: E0121 10:58:40.453297 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:40.953279386 +0000 UTC m=+108.213235855 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.512040 4881 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-whh46 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.33:6443/healthz\": dial tcp 10.217.0.33:6443: connect: connection refused" start-of-body= Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.512099 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-whh46" podUID="2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.33:6443/healthz\": dial tcp 10.217.0.33:6443: connect: connection refused" Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.517038 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.517114 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.517218 4881 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zkkpc container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 21 10:58:40 crc kubenswrapper[4881]: 
I0121 10:58:40.517237 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc" podUID="0007a585-5b17-44bd-89b8-2d1d233a03d4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.555203 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:40 crc kubenswrapper[4881]: E0121 10:58:40.555614 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:41.055574018 +0000 UTC m=+108.315530527 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.556385 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:40 crc kubenswrapper[4881]: E0121 10:58:40.561673 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:41.061645076 +0000 UTC m=+108.321601755 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.657820 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:40 crc kubenswrapper[4881]: E0121 10:58:40.657964 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 10:58:41.157924801 +0000 UTC m=+108.417881270 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.658451 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:40 crc kubenswrapper[4881]: E0121 10:58:40.659007 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:41.158988208 +0000 UTC m=+108.418944677 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.860265 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:40 crc kubenswrapper[4881]: E0121 10:58:40.860422 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:41.360398224 +0000 UTC m=+108.620354693 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.860884 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:40 crc kubenswrapper[4881]: E0121 10:58:40.861511 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:41.361483761 +0000 UTC m=+108.621440270 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.031446 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:41 crc kubenswrapper[4881]: E0121 10:58:41.032009 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:41.531982569 +0000 UTC m=+108.791939038 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.135537 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:58:41 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld Jan 21 10:58:41 crc kubenswrapper[4881]: [+]process-running ok Jan 21 10:58:41 crc kubenswrapper[4881]: healthz check failed Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.135613 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.136873 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:41 crc kubenswrapper[4881]: E0121 10:58:41.137279 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:41.637264954 +0000 UTC m=+108.897221423 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.237829 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:41 crc kubenswrapper[4881]: E0121 10:58:41.238202 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:41.738185203 +0000 UTC m=+108.998141672 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.339514 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:41 crc kubenswrapper[4881]: E0121 10:58:41.340947 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:41.840936557 +0000 UTC m=+109.100893026 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.443764 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:41 crc kubenswrapper[4881]: E0121 10:58:41.444296 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:41.944273894 +0000 UTC m=+109.204230363 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.555051 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:41 crc kubenswrapper[4881]: E0121 10:58:41.555499 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:42.055472806 +0000 UTC m=+109.315429275 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.564079 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6" event={"ID":"2957ef21-9f30-4c81-8c6a-4a7f9d7315db","Type":"ContainerStarted","Data":"274060c852c28f0aa96e0ad4d532d1dcb9096dd3bcdb95eb1c0a740452bc99e2"}
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.580027 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb" event={"ID":"7470431a-2a31-41ae-b021-510ae5e3c505","Type":"ContainerStarted","Data":"bbc24598d39fe0e64db70dc4aacf9d02d6d8b03d34f37bffa5a9aa3ec6f35658"}
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.599646 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68" event={"ID":"b745a377-4575-45fb-a206-ea4754ecff76","Type":"ContainerStarted","Data":"b41967d3bdb4370227d82839dc1862e1f74b1c61b2e573915f3a2a8ab7402fa8"}
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.617442 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-n2h44" event={"ID":"863eda44-9a47-42de-b2de-49234ac647f0","Type":"ContainerStarted","Data":"7f1e8826c38ff99f84057f36a7902286122626aa449227eefad6555a07039a2e"}
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.672266 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:41 crc kubenswrapper[4881]: E0121 10:58:41.672422 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:42.172390177 +0000 UTC m=+109.432346646 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.672768 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.673007 4881 generic.go:334] "Generic (PLEG): container finished" podID="303bdbe6-3bb4-4ace-86b1-f489c795580f" containerID="2f6a1a1e4268540ee682b58127eb41126b116ba4e30186b584ee325d0961ebec" exitCode=0
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.673082 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk" event={"ID":"303bdbe6-3bb4-4ace-86b1-f489c795580f","Type":"ContainerDied","Data":"2f6a1a1e4268540ee682b58127eb41126b116ba4e30186b584ee325d0961ebec"}
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.673154 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-n2h44" podStartSLOduration=86.673143306 podStartE2EDuration="1m26.673143306s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:41.672437859 +0000 UTC m=+108.932394318" watchObservedRunningTime="2026-01-21 10:58:41.673143306 +0000 UTC m=+108.933099775"
Jan 21 10:58:41 crc kubenswrapper[4881]: E0121 10:58:41.673646 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:42.173633348 +0000 UTC m=+109.433589817 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.685107 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" event={"ID":"e94f1e92-21b2-44c9-b499-b879850c288d","Type":"ContainerStarted","Data":"814fc7d7b657d30002e0169875973f3d65029d02d56ac8702f4d08fa12940079"}
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.686067 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82"
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.689384 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hfc8p" event={"ID":"bc38f0b5-944c-40ae-aed0-50ca39ea2627","Type":"ContainerStarted","Data":"6503778a0e40497db90ff5d56281380f9d5aa7132b164aeb728970c4ece7f655"}
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.750268 4881 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xmq82 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.750318 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" podUID="e94f1e92-21b2-44c9-b499-b879850c288d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.755747 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl" event={"ID":"5c8e7010-8b57-47ed-9270-417650a2a7c5","Type":"ContainerStarted","Data":"2b0d06dde0904501ce111fd57e37adca846e4da2eb029ea2a8db58ed1417d15d"}
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.757690 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" event={"ID":"c56c4a24-e6c6-4aa0-8a62-94d3179dfe54","Type":"ContainerStarted","Data":"61ebc8fd525d43c2fed8d3c5eb147049c107d40ccc8ed9533e7103a63058c427"}
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.758417 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq"
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.763383 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kl9j4" event={"ID":"86acb693-c0d9-41f4-b33c-4716963ce268","Type":"ContainerStarted","Data":"35c9fd04d6545158e671724659b662ae119dc9bf1e2056a673d83d4e2c182473"}
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.765169 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-llgd7" event={"ID":"5f2944a8-8d91-4461-aa64-8908ca17f59e","Type":"ContainerStarted","Data":"57795c377416a2b444b6643cd056439aca4bebab0c719d95342ddf54bfc67891"}
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.773984 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:41 crc kubenswrapper[4881]: E0121 10:58:41.775560 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:42.275540711 +0000 UTC m=+109.535497180 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.833806 4881 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-7gdkq container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body=
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.834165 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" podUID="c56c4a24-e6c6-4aa0-8a62-94d3179dfe54" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused"
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.873130 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.873183 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.873140 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.873233 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.874922 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:41 crc kubenswrapper[4881]: E0121 10:58:41.875276 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:42.37526174 +0000 UTC m=+109.635218209 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.875265 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" podStartSLOduration=85.87525475 podStartE2EDuration="1m25.87525475s" podCreationTimestamp="2026-01-21 10:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:41.833572176 +0000 UTC m=+109.093528645" watchObservedRunningTime="2026-01-21 10:58:41.87525475 +0000 UTC m=+109.135211219"
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.875581 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hfc8p" podStartSLOduration=85.875567267 podStartE2EDuration="1m25.875567267s" podCreationTimestamp="2026-01-21 10:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:41.875381593 +0000 UTC m=+109.135338062" watchObservedRunningTime="2026-01-21 10:58:41.875567267 +0000 UTC m=+109.135523736"
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.902855 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-qxzd9"
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.902908 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-qxzd9"
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.908070 4881 patch_prober.go:28] interesting pod/console-f9d7485db-qxzd9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.908130 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-qxzd9" podUID="bb8fc8b3-9818-40e2-afb2-860e2d1efae1" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused"
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.976453 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:41 crc kubenswrapper[4881]: E0121 10:58:41.976950 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:42.476933188 +0000 UTC m=+109.736889657 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.977082 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:41 crc kubenswrapper[4881]: E0121 10:58:41.978887 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:42.478878705 +0000 UTC m=+109.738835174 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.078177 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:42 crc kubenswrapper[4881]: E0121 10:58:42.078505 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:42.578490641 +0000 UTC m=+109.838447110 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.087470 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-llgd7" podStartSLOduration=86.087448402 podStartE2EDuration="1m26.087448402s" podCreationTimestamp="2026-01-21 10:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:41.933212034 +0000 UTC m=+109.193168503" watchObservedRunningTime="2026-01-21 10:58:42.087448402 +0000 UTC m=+109.347404871"
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.132276 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl" podStartSLOduration=87.132257153 podStartE2EDuration="1m27.132257153s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:42.086092279 +0000 UTC m=+109.346048748" watchObservedRunningTime="2026-01-21 10:58:42.132257153 +0000 UTC m=+109.392213622"
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.134397 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-v7wnh"
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.146001 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 10:58:42 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld
Jan 21 10:58:42 crc kubenswrapper[4881]: [+]process-running ok
Jan 21 10:58:42 crc kubenswrapper[4881]: healthz check failed
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.146060 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.181295 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" podStartSLOduration=86.181277637 podStartE2EDuration="1m26.181277637s" podCreationTimestamp="2026-01-21 10:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:42.134264502 +0000 UTC m=+109.394220971" watchObservedRunningTime="2026-01-21 10:58:42.181277637 +0000 UTC m=+109.441234106"
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.246425 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:42 crc kubenswrapper[4881]: E0121 10:58:42.246757 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:42.746742045 +0000 UTC m=+110.006698514 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.348287 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:42 crc kubenswrapper[4881]: E0121 10:58:42.348422 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:42.848399811 +0000 UTC m=+110.108356280 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.348517 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:42 crc kubenswrapper[4881]: E0121 10:58:42.348867 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:42.848852882 +0000 UTC m=+110.108809351 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.449624 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.450610 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:42 crc kubenswrapper[4881]: E0121 10:58:42.450803 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:42.950769245 +0000 UTC m=+110.210725714 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.450919 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:42 crc kubenswrapper[4881]: E0121 10:58:42.451274 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:42.951267268 +0000 UTC m=+110.211223737 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.551800 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:42 crc kubenswrapper[4881]: E0121 10:58:42.552678 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:43.052661668 +0000 UTC m=+110.312618137 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.619055 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-kl9j4" podStartSLOduration=13.619036268 podStartE2EDuration="13.619036268s" podCreationTimestamp="2026-01-21 10:58:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:42.188566635 +0000 UTC m=+109.448523104" watchObservedRunningTime="2026-01-21 10:58:42.619036268 +0000 UTC m=+109.878992737"
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.653040 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:42 crc kubenswrapper[4881]: E0121 10:58:42.653366 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:43.153354671 +0000 UTC m=+110.413311140 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.680811 4881 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xmq82 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.680885 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" podUID="e94f1e92-21b2-44c9-b499-b879850c288d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.687586 4881 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-7gdkq container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body=
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.687639 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" podUID="c56c4a24-e6c6-4aa0-8a62-94d3179dfe54" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused"
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.688074 4881 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-7gdkq container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body=
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.688103 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" podUID="c56c4a24-e6c6-4aa0-8a62-94d3179dfe54" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused"
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.688324 4881 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xmq82 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.688352 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" podUID="e94f1e92-21b2-44c9-b499-b879850c288d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.688879 4881 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zkkpc container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.688907 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc" podUID="0007a585-5b17-44bd-89b8-2d1d233a03d4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.688972 4881 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zkkpc container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.688990 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc" podUID="0007a585-5b17-44bd-89b8-2d1d233a03d4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.804261 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:42 crc kubenswrapper[4881]: E0121 10:58:42.804822 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:43.304802741 +0000 UTC m=+110.564759210 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.881240 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-j4s5w" event={"ID":"6742e18f-a187-4a77-a734-bdec89bd89e0","Type":"ContainerStarted","Data":"a55860c76dea8cc83448f2c5a84a34699b18e04ed2bd2c673062b583a1fe43b9"}
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.908011 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-znm6j" event={"ID":"b7e58845-f0a1-4320-b879-0765b6d57988","Type":"ContainerStarted","Data":"b74f6f2753b51931c8c7886efc96ad27509b6327f8db826907947ae3fa7e5941"}
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.908046 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-znm6j"
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.910571 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:42 crc kubenswrapper[4881]: E0121 10:58:42.910923 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:43.410911647 +0000 UTC m=+110.670868116 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.911301 4881 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xmq82 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.911343 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" podUID="e94f1e92-21b2-44c9-b499-b879850c288d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.911747 4881 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-7gdkq container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body=
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.911771 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" podUID="c56c4a24-e6c6-4aa0-8a62-94d3179dfe54" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused"
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.014871 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:43 crc kubenswrapper[4881]: E0121 10:58:43.016393 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:43.516373037 +0000 UTC m=+110.776329516 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.117775 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:43 crc kubenswrapper[4881]: E0121 10:58:43.118104 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:43.618092666 +0000 UTC m=+110.878049135 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.134506 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 10:58:43 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld
Jan 21 10:58:43 crc kubenswrapper[4881]: [+]process-running ok
Jan 21 10:58:43 crc kubenswrapper[4881]: healthz check failed
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.134570 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.220740 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:43 crc kubenswrapper[4881]: E0121 10:58:43.221479 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:43.721460905 +0000 UTC m=+110.981417374 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.261296 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-j4s5w" podStartSLOduration=88.261273872 podStartE2EDuration="1m28.261273872s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:43.067583685 +0000 UTC m=+110.327540154" watchObservedRunningTime="2026-01-21 10:58:43.261273872 +0000 UTC m=+110.521230341"
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.322933 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:43 crc kubenswrapper[4881]: E0121 10:58:43.323707 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:43.823692455 +0000 UTC m=+111.083648924 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.345389 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-znm6j" podStartSLOduration=14.345370437 podStartE2EDuration="14.345370437s" podCreationTimestamp="2026-01-21 10:58:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:43.262195705 +0000 UTC m=+110.522152184" watchObservedRunningTime="2026-01-21 10:58:43.345370437 +0000 UTC m=+110.605326906"
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.423815 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:43 crc kubenswrapper[4881]: E0121 10:58:43.424235 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:43.924216815 +0000 UTC m=+111.184173284 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.525138 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:43 crc kubenswrapper[4881]: E0121 10:58:43.525549 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:44.025532643 +0000 UTC m=+111.285489112 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.636303 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:43 crc kubenswrapper[4881]: E0121 10:58:43.636579 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:44.136542079 +0000 UTC m=+111.396498548 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.636864 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:43 crc kubenswrapper[4881]: E0121 10:58:43.637416 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:44.137404071 +0000 UTC m=+111.397360540 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.737963 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:43 crc kubenswrapper[4881]: E0121 10:58:43.738141 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:44.238116204 +0000 UTC m=+111.498072673 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.738572 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:43 crc kubenswrapper[4881]: E0121 10:58:43.738878 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:44.238865892 +0000 UTC m=+111.498822361 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.840555 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:43 crc kubenswrapper[4881]: E0121 10:58:43.841864 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:44.341829562 +0000 UTC m=+111.601786041 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.922512 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" event={"ID":"3552adbd-011f-4552-9e69-233b92c554c8","Type":"ContainerStarted","Data":"2f70c26dd006302ba39fd20f4edc424c87daa3fb0cb961652a77e27d4c4c5f81"}
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.924262 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-42f9f" event={"ID":"409e44ed-8f6d-4321-9620-d8da23cf0fec","Type":"ContainerStarted","Data":"bf0cd8f2e1a07f1495e8b5070edd36bdf049cee20ed91cae8e65491224ad9404"}
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.926838 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-svmbc" event={"ID":"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57","Type":"ContainerStarted","Data":"aa95308cf74bd69f9dd89eda71c93fb0b953f4273db045079776216eba82ac6c"}
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.930277 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc" event={"ID":"8465162e-dd9f-45b4-83a6-94666ac2b87b","Type":"ContainerStarted","Data":"5346a63af1f87f1840ae91c7e61204fd86101b16375b797b124a38d2d1a4d526"}
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.934133 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" event={"ID":"7f30da15-7c75-4c87-9dc4-78653d6f84cd","Type":"ContainerStarted","Data":"bf541b970161b08f2e69d709b38c1f8215e1e67f2b3172fe3c3545b6f18c8d31"}
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.935253 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6"
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.949909 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:43 crc kubenswrapper[4881]: E0121 10:58:43.950455 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:44.450440749 +0000 UTC m=+111.710397218 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.966614 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc" podStartSLOduration=87.966595786 podStartE2EDuration="1m27.966595786s" podCreationTimestamp="2026-01-21 10:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:43.965498769 +0000 UTC m=+111.225455238" watchObservedRunningTime="2026-01-21 10:58:43.966595786 +0000 UTC m=+111.226552255"
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.974872 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" event={"ID":"146cbde4-d891-47d8-a09f-d4f4b50bfe6d","Type":"ContainerStarted","Data":"7623aa552682368b5ab7546c7abf5426a9fc54a24390c180b2fd1c52a1fc3c59"}
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.006723 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" podStartSLOduration=88.006704741 podStartE2EDuration="1m28.006704741s" podCreationTimestamp="2026-01-21 10:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:44.004446365 +0000 UTC m=+111.264402834" watchObservedRunningTime="2026-01-21 10:58:44.006704741 +0000 UTC m=+111.266661210"
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.008649 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6" event={"ID":"2957ef21-9f30-4c81-8c6a-4a7f9d7315db","Type":"ContainerStarted","Data":"82cff7c637ca9ea34404cbbdd6bb09a799782c323f2954c300f85111c45a2087"}
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.009419 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6"
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.024994 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb" event={"ID":"7470431a-2a31-41ae-b021-510ae5e3c505","Type":"ContainerStarted","Data":"d8d9070bb71902da921f2644b474d206bf23dae6634bd4a1926be15aaa2266a2"}
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.051365 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:44 crc kubenswrapper[4881]: E0121 10:58:44.051932 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:44.551913811 +0000 UTC m=+111.811870280 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.058072 4881 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rdgn6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" start-of-body=
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.058137 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" podUID="7f30da15-7c75-4c87-9dc4-78653d6f84cd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused"
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.075236 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68" event={"ID":"b745a377-4575-45fb-a206-ea4754ecff76","Type":"ContainerStarted","Data":"3033d149a930a00978fa1ff937f61c5442e5512fd3248aab1dddf52694995bdd"}
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.081353 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" event={"ID":"537a87a4-8f58-441f-9199-62c5849c693c","Type":"ContainerStarted","Data":"f49722c43dfa54ea40a3a717b4d9f4d1e23fd65e4ceaaf2c1d50a6e52c41eba1"}
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.082024 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2"
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.082247 4881 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xmq82 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.082294 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" podUID="e94f1e92-21b2-44c9-b499-b879850c288d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.083241 4881 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-7gdkq container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body=
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.083287 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" podUID="c56c4a24-e6c6-4aa0-8a62-94d3179dfe54" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused"
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.153720 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:44 crc kubenswrapper[4881]: E0121 10:58:44.171560 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:44.671522969 +0000 UTC m=+111.931479478 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.244642 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6" podStartSLOduration=88.244614634 podStartE2EDuration="1m28.244614634s" podCreationTimestamp="2026-01-21 10:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:44.170222247 +0000 UTC m=+111.430178726" watchObservedRunningTime="2026-01-21 10:58:44.244614634 +0000 UTC m=+111.504571103" Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.244930 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68" podStartSLOduration=90.244924162 podStartE2EDuration="1m30.244924162s" podCreationTimestamp="2026-01-21 10:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:44.244061511 +0000 UTC m=+111.504017980" watchObservedRunningTime="2026-01-21 10:58:44.244924162 +0000 UTC m=+111.504880631" Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.257778 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:58:44 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld Jan 21 10:58:44 crc kubenswrapper[4881]: [+]process-running ok Jan 21 10:58:44 crc kubenswrapper[4881]: healthz check failed Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.257845 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.258266 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:44 crc kubenswrapper[4881]: E0121 10:58:44.259532 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:44.75951845 +0000 UTC m=+112.019474909 (durationBeforeRetry 500ms). 
Jan 21 10:58:44 crc kubenswrapper[4881]: E0121 10:58:44.259532 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:44.75951845 +0000 UTC m=+112.019474909 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.361223 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:44 crc kubenswrapper[4881]: E0121 10:58:44.362250 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:44.862232333 +0000 UTC m=+112.122188802 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.464246 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:44 crc kubenswrapper[4881]: E0121 10:58:44.464649 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:44.964633888 +0000 UTC m=+112.224590357 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.480134 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb" podStartSLOduration=89.480108418 podStartE2EDuration="1m29.480108418s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:44.308730479 +0000 UTC m=+111.568686968" watchObservedRunningTime="2026-01-21 10:58:44.480108418 +0000 UTC m=+111.740064877"
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.580042 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:44 crc kubenswrapper[4881]: E0121 10:58:44.580430 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:45.080412381 +0000 UTC m=+112.340368850 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.632373 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk"
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.655119 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" podStartSLOduration=89.655103087 podStartE2EDuration="1m29.655103087s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:44.485220223 +0000 UTC m=+111.745176692" watchObservedRunningTime="2026-01-21 10:58:44.655103087 +0000 UTC m=+111.915059556"
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.685146 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:44 crc kubenswrapper[4881]: E0121 10:58:44.685546 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:45.185529163 +0000 UTC m=+112.445485632 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.786215 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/303bdbe6-3bb4-4ace-86b1-f489c795580f-secret-volume\") pod \"303bdbe6-3bb4-4ace-86b1-f489c795580f\" (UID: \"303bdbe6-3bb4-4ace-86b1-f489c795580f\") "
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.786714 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l45nv\" (UniqueName: \"kubernetes.io/projected/303bdbe6-3bb4-4ace-86b1-f489c795580f-kube-api-access-l45nv\") pod \"303bdbe6-3bb4-4ace-86b1-f489c795580f\" (UID: \"303bdbe6-3bb4-4ace-86b1-f489c795580f\") "
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.786977 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/303bdbe6-3bb4-4ace-86b1-f489c795580f-config-volume\") pod \"303bdbe6-3bb4-4ace-86b1-f489c795580f\" (UID: \"303bdbe6-3bb4-4ace-86b1-f489c795580f\") "
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.787190 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:44 crc kubenswrapper[4881]: E0121 10:58:44.787653 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:45.287637451 +0000 UTC m=+112.547593930 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.794751 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/303bdbe6-3bb4-4ace-86b1-f489c795580f-config-volume" (OuterVolumeSpecName: "config-volume") pod "303bdbe6-3bb4-4ace-86b1-f489c795580f" (UID: "303bdbe6-3bb4-4ace-86b1-f489c795580f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.821422 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/303bdbe6-3bb4-4ace-86b1-f489c795580f-kube-api-access-l45nv" (OuterVolumeSpecName: "kube-api-access-l45nv") pod "303bdbe6-3bb4-4ace-86b1-f489c795580f" (UID: "303bdbe6-3bb4-4ace-86b1-f489c795580f"). InnerVolumeSpecName "kube-api-access-l45nv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.821844 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/303bdbe6-3bb4-4ace-86b1-f489c795580f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "303bdbe6-3bb4-4ace-86b1-f489c795580f" (UID: "303bdbe6-3bb4-4ace-86b1-f489c795580f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.894045 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.894417 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/303bdbe6-3bb4-4ace-86b1-f489c795580f-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.894432 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/303bdbe6-3bb4-4ace-86b1-f489c795580f-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.894442 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l45nv\" (UniqueName: \"kubernetes.io/projected/303bdbe6-3bb4-4ace-86b1-f489c795580f-kube-api-access-l45nv\") on node \"crc\" DevicePath \"\""
Jan 21 10:58:44 crc kubenswrapper[4881]: E0121 10:58:44.894509 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:45.394494586 +0000 UTC m=+112.654451055 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.995192 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:44 crc kubenswrapper[4881]: E0121 10:58:44.995478 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:45.495467116 +0000 UTC m=+112.755423585 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.096390 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:45 crc kubenswrapper[4881]: E0121 10:58:45.096578 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:45.596552909 +0000 UTC m=+112.856509378 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.096622 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:45 crc kubenswrapper[4881]: E0121 10:58:45.097123 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:45.597107962 +0000 UTC m=+112.857064431 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.122355 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 10:58:45 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld
Jan 21 10:58:45 crc kubenswrapper[4881]: [+]process-running ok
Jan 21 10:58:45 crc kubenswrapper[4881]: healthz check failed
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.122425 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.122502 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk" event={"ID":"303bdbe6-3bb4-4ace-86b1-f489c795580f","Type":"ContainerDied","Data":"b3d019b82236dd15b24f4a31ba5ebc67107e80ee3f592acc46c51b2bbe16aba5"}
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.122539 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3d019b82236dd15b24f4a31ba5ebc67107e80ee3f592acc46c51b2bbe16aba5"
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.122609 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk"
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.136772 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-svmbc" event={"ID":"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57","Type":"ContainerStarted","Data":"f1c07b5b1a05d1bf9768ec195ee3a2c9acc9824cfa685e9f6db9da31ab9c0a77"}
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.146366 4881 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rdgn6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" start-of-body=
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.146658 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" podUID="7f30da15-7c75-4c87-9dc4-78653d6f84cd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused"
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.147059 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" event={"ID":"3552adbd-011f-4552-9e69-233b92c554c8","Type":"ContainerStarted","Data":"4bb2b3e87d7c25e84c22d640a23b187cb954c20cb8555c8fc9006393fea81bd7"}
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.150206 4881 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-rslv2 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.150331 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" podUID="537a87a4-8f58-441f-9199-62c5849c693c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.185602 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-dtv4t" podStartSLOduration=90.185584546 podStartE2EDuration="1m30.185584546s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:45.182077369 +0000 UTC m=+112.442033848" watchObservedRunningTime="2026-01-21 10:58:45.185584546 +0000 UTC m=+112.445541015"
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.201438 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:45 crc kubenswrapper[4881]: E0121 10:58:45.201893 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:45.701876795 +0000 UTC m=+112.961833264 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.227316 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 21 10:58:45 crc kubenswrapper[4881]: E0121 10:58:45.227503 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="303bdbe6-3bb4-4ace-86b1-f489c795580f" containerName="collect-profiles"
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.227514 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="303bdbe6-3bb4-4ace-86b1-f489c795580f" containerName="collect-profiles"
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.227623 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="303bdbe6-3bb4-4ace-86b1-f489c795580f" containerName="collect-profiles"
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.227974 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.233567 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.233735 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.248812 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" podStartSLOduration=89.248771318 podStartE2EDuration="1m29.248771318s" podCreationTimestamp="2026-01-21 10:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:45.233042311 +0000 UTC m=+112.492998770" watchObservedRunningTime="2026-01-21 10:58:45.248771318 +0000 UTC m=+112.508727817"
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.251513 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.260453 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-svmbc" podStartSLOduration=90.260431574 podStartE2EDuration="1m30.260431574s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:45.259271495 +0000 UTC m=+112.519227964" watchObservedRunningTime="2026-01-21 10:58:45.260431574 +0000 UTC m=+112.520388043"
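[Note: the mount and unmount retries above all share one root cause: the kubelet cannot find kubevirt.io.hostpath-provisioner among its registered CSI drivers, so nestedpendingoperations re-queues each operation on the fixed 500ms backoff shown as durationBeforeRetry. The per-node list of registered drivers is published in the CSINode object; the Go sketch below is not part of the log and only illustrates how one might read it with client-go. The kubeconfig path is a placeholder, and the node name "crc" is taken from the hostname in these records.]

// csidrivers.go -- minimal sketch, assuming a reachable API server.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The CSINode object mirrors the kubelet's view of node-registered CSI
	// drivers -- the list behind "not found in the list of registered CSI drivers".
	csiNode, err := client.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range csiNode.Spec.Drivers {
		fmt.Println(d.Name) // kubevirt.io.hostpath-provisioner appears once registration completes
	}
}

[Once the driver pod registers, which the plugin_watcher and csi_plugin records later in this log show happening at 10:58:45-46, the driver shows up in this list and the retry loop clears on its own.]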
pod \"revision-pruner-9-crc\" (UID: \"bac3c741-e8bc-4059-8914-a6f834cee8dd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.322516 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bac3c741-e8bc-4059-8914-a6f834cee8dd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bac3c741-e8bc-4059-8914-a6f834cee8dd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.322672 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:45 crc kubenswrapper[4881]: E0121 10:58:45.332340 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:45.832325489 +0000 UTC m=+113.092281958 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.423512 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:45 crc kubenswrapper[4881]: E0121 10:58:45.424070 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:45.924037852 +0000 UTC m=+113.183994311 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.424407 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bac3c741-e8bc-4059-8914-a6f834cee8dd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bac3c741-e8bc-4059-8914-a6f834cee8dd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.424473 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bac3c741-e8bc-4059-8914-a6f834cee8dd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bac3c741-e8bc-4059-8914-a6f834cee8dd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.424508 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:45 crc kubenswrapper[4881]: E0121 10:58:45.424936 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:45.924918813 +0000 UTC m=+113.184875282 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.425261 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bac3c741-e8bc-4059-8914-a6f834cee8dd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bac3c741-e8bc-4059-8914-a6f834cee8dd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.474951 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bac3c741-e8bc-4059-8914-a6f834cee8dd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bac3c741-e8bc-4059-8914-a6f834cee8dd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.544037 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:45 crc kubenswrapper[4881]: E0121 10:58:45.544434 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:46.044412519 +0000 UTC m=+113.304368988 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.562050 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.646944 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:45 crc kubenswrapper[4881]: E0121 10:58:45.647331 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:46.147316266 +0000 UTC m=+113.407272735 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.748089 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:45 crc kubenswrapper[4881]: E0121 10:58:45.749744 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:46.249722701 +0000 UTC m=+113.509679180 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.850760 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:45 crc kubenswrapper[4881]: E0121 10:58:45.851305 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:46.351280196 +0000 UTC m=+113.611236665 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.916520 4881 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.957160 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:45 crc kubenswrapper[4881]: E0121 10:58:45.957250 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:46.457221888 +0000 UTC m=+113.717178357 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.957873 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:45 crc kubenswrapper[4881]: E0121 10:58:45.958344 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:46.458327135 +0000 UTC m=+113.718283604 (durationBeforeRetry 500ms). 
Jan 21 10:58:45 crc kubenswrapper[4881]: E0121 10:58:45.958344 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:46.458327135 +0000 UTC m=+113.718283604 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.059395 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:46 crc kubenswrapper[4881]: E0121 10:58:46.059560 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:46.559528611 +0000 UTC m=+113.819485080 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.059746 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:46 crc kubenswrapper[4881]: E0121 10:58:46.060080 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:46.560073224 +0000 UTC m=+113.820029693 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.129902 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 10:58:46 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld
Jan 21 10:58:46 crc kubenswrapper[4881]: [+]process-running ok
Jan 21 10:58:46 crc kubenswrapper[4881]: healthz check failed
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.130021 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.138578 4881 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-21T10:58:45.91657925Z","Handler":null,"Name":""}
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.140494 4881 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.140532 4881 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.160627 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.175556 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.231846 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-42f9f" event={"ID":"409e44ed-8f6d-4321-9620-d8da23cf0fec","Type":"ContainerStarted","Data":"c4b0f42b255ce85c83eb57dee5cfd3b3f516049e2da8fe43c690d7827b428eb3"}
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.232460 4881 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-rslv2 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.232529 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" podUID="537a87a4-8f58-441f-9199-62c5849c693c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.297834 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.310444 4881 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.310517 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.401437 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f"
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.541140 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.796519 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 10:58:46 crc kubenswrapper[4881]: W0121 10:58:46.817617 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podbac3c741_e8bc_4059_8914_a6f834cee8dd.slice/crio-d000f23cdd5d4f1ece21017d747b89cc98e096184b532595e3b8592df18c9c55 WatchSource:0}: Error finding container d000f23cdd5d4f1ece21017d747b89cc98e096184b532595e3b8592df18c9c55: Status 404 returned error can't find the container with id d000f23cdd5d4f1ece21017d747b89cc98e096184b532595e3b8592df18c9c55 Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.862923 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q6dn5"] Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.864309 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q6dn5" Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.868015 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.868204 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.868375 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.870577 4881 patch_prober.go:28] interesting pod/apiserver-76f77b778f-svmbc container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.870621 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-svmbc" podUID="3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.900966 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q6dn5"] Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.007100 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.007319 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e002e57-13ab-477a-9e16-980e13b5e47f-catalog-content\") pod \"certified-operators-q6dn5\" (UID: \"8e002e57-13ab-477a-9e16-980e13b5e47f\") " pod="openshift-marketplace/certified-operators-q6dn5" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.007485 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e002e57-13ab-477a-9e16-980e13b5e47f-utilities\") pod \"certified-operators-q6dn5\" (UID: 
\"8e002e57-13ab-477a-9e16-980e13b5e47f\") " pod="openshift-marketplace/certified-operators-q6dn5" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.007634 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g42w8\" (UniqueName: \"kubernetes.io/projected/8e002e57-13ab-477a-9e16-980e13b5e47f-kube-api-access-g42w8\") pod \"certified-operators-q6dn5\" (UID: \"8e002e57-13ab-477a-9e16-980e13b5e47f\") " pod="openshift-marketplace/certified-operators-q6dn5" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.036576 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v5n2s"] Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.038226 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v5n2s" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.041057 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.063413 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v5n2s"] Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.100464 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.104372 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.116640 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e002e57-13ab-477a-9e16-980e13b5e47f-catalog-content\") pod \"certified-operators-q6dn5\" (UID: \"8e002e57-13ab-477a-9e16-980e13b5e47f\") " pod="openshift-marketplace/certified-operators-q6dn5" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.117028 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e002e57-13ab-477a-9e16-980e13b5e47f-utilities\") pod \"certified-operators-q6dn5\" (UID: \"8e002e57-13ab-477a-9e16-980e13b5e47f\") " pod="openshift-marketplace/certified-operators-q6dn5" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.117243 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g42w8\" (UniqueName: \"kubernetes.io/projected/8e002e57-13ab-477a-9e16-980e13b5e47f-kube-api-access-g42w8\") pod \"certified-operators-q6dn5\" (UID: \"8e002e57-13ab-477a-9e16-980e13b5e47f\") " pod="openshift-marketplace/certified-operators-q6dn5" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.118071 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e002e57-13ab-477a-9e16-980e13b5e47f-catalog-content\") pod \"certified-operators-q6dn5\" (UID: \"8e002e57-13ab-477a-9e16-980e13b5e47f\") " pod="openshift-marketplace/certified-operators-q6dn5" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.118252 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e002e57-13ab-477a-9e16-980e13b5e47f-utilities\") pod \"certified-operators-q6dn5\" (UID: \"8e002e57-13ab-477a-9e16-980e13b5e47f\") " 
pod="openshift-marketplace/certified-operators-q6dn5" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.123116 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:58:47 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld Jan 21 10:58:47 crc kubenswrapper[4881]: [+]process-running ok Jan 21 10:58:47 crc kubenswrapper[4881]: healthz check failed Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.123172 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.136981 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.161726 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g42w8\" (UniqueName: \"kubernetes.io/projected/8e002e57-13ab-477a-9e16-980e13b5e47f-kube-api-access-g42w8\") pod \"certified-operators-q6dn5\" (UID: \"8e002e57-13ab-477a-9e16-980e13b5e47f\") " pod="openshift-marketplace/certified-operators-q6dn5" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.195121 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q6dn5" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.219160 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-utilities\") pod \"community-operators-v5n2s\" (UID: \"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a\") " pod="openshift-marketplace/community-operators-v5n2s" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.219230 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-catalog-content\") pod \"community-operators-v5n2s\" (UID: \"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a\") " pod="openshift-marketplace/community-operators-v5n2s" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.219318 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf89m\" (UniqueName: \"kubernetes.io/projected/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-kube-api-access-mf89m\") pod \"community-operators-v5n2s\" (UID: \"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a\") " pod="openshift-marketplace/community-operators-v5n2s" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.239519 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2sqlm"] Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.240661 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2sqlm" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.298860 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2sqlm"] Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.303504 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bac3c741-e8bc-4059-8914-a6f834cee8dd","Type":"ContainerStarted","Data":"d000f23cdd5d4f1ece21017d747b89cc98e096184b532595e3b8592df18c9c55"} Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.305957 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-42f9f" event={"ID":"409e44ed-8f6d-4321-9620-d8da23cf0fec","Type":"ContainerStarted","Data":"8245115cf5cd1ff0788aba3d223fbe0052e99f64b818eb3fccd9c5e9e87ad2e4"} Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.320631 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf89m\" (UniqueName: \"kubernetes.io/projected/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-kube-api-access-mf89m\") pod \"community-operators-v5n2s\" (UID: \"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a\") " pod="openshift-marketplace/community-operators-v5n2s" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.320986 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-utilities\") pod \"community-operators-v5n2s\" (UID: \"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a\") " pod="openshift-marketplace/community-operators-v5n2s" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.321111 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-catalog-content\") pod \"community-operators-v5n2s\" (UID: \"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a\") " pod="openshift-marketplace/community-operators-v5n2s" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.321832 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-catalog-content\") pod \"community-operators-v5n2s\" (UID: \"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a\") " pod="openshift-marketplace/community-operators-v5n2s" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.322944 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-utilities\") pod \"community-operators-v5n2s\" (UID: \"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a\") " pod="openshift-marketplace/community-operators-v5n2s" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.357232 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.422564 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b12596d-1f5f-4d81-b664-d0ddee72552c-catalog-content\") pod \"certified-operators-2sqlm\" (UID: \"5b12596d-1f5f-4d81-b664-d0ddee72552c\") " pod="openshift-marketplace/certified-operators-2sqlm" Jan 21 10:58:47 crc 
kubenswrapper[4881]: I0121 10:58:47.422701 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b12596d-1f5f-4d81-b664-d0ddee72552c-utilities\") pod \"certified-operators-2sqlm\" (UID: \"5b12596d-1f5f-4d81-b664-d0ddee72552c\") " pod="openshift-marketplace/certified-operators-2sqlm" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.422768 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrsm4\" (UniqueName: \"kubernetes.io/projected/5b12596d-1f5f-4d81-b664-d0ddee72552c-kube-api-access-lrsm4\") pod \"certified-operators-2sqlm\" (UID: \"5b12596d-1f5f-4d81-b664-d0ddee72552c\") " pod="openshift-marketplace/certified-operators-2sqlm" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.434262 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.446986 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf89m\" (UniqueName: \"kubernetes.io/projected/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-kube-api-access-mf89m\") pod \"community-operators-v5n2s\" (UID: \"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a\") " pod="openshift-marketplace/community-operators-v5n2s" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.451413 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6rmvm"] Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.452670 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6rmvm" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.462525 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6rmvm"] Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.523486 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b12596d-1f5f-4d81-b664-d0ddee72552c-utilities\") pod \"certified-operators-2sqlm\" (UID: \"5b12596d-1f5f-4d81-b664-d0ddee72552c\") " pod="openshift-marketplace/certified-operators-2sqlm" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.523546 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2dkc\" (UniqueName: \"kubernetes.io/projected/2c460bf5-05a1-4977-b889-1a5c3263df33-kube-api-access-p2dkc\") pod \"community-operators-6rmvm\" (UID: \"2c460bf5-05a1-4977-b889-1a5c3263df33\") " pod="openshift-marketplace/community-operators-6rmvm" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.523603 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrsm4\" (UniqueName: \"kubernetes.io/projected/5b12596d-1f5f-4d81-b664-d0ddee72552c-kube-api-access-lrsm4\") pod \"certified-operators-2sqlm\" (UID: \"5b12596d-1f5f-4d81-b664-d0ddee72552c\") " pod="openshift-marketplace/certified-operators-2sqlm" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.523629 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c460bf5-05a1-4977-b889-1a5c3263df33-catalog-content\") pod \"community-operators-6rmvm\" (UID: \"2c460bf5-05a1-4977-b889-1a5c3263df33\") " 
pod="openshift-marketplace/community-operators-6rmvm" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.523678 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c460bf5-05a1-4977-b889-1a5c3263df33-utilities\") pod \"community-operators-6rmvm\" (UID: \"2c460bf5-05a1-4977-b889-1a5c3263df33\") " pod="openshift-marketplace/community-operators-6rmvm" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.523740 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b12596d-1f5f-4d81-b664-d0ddee72552c-catalog-content\") pod \"certified-operators-2sqlm\" (UID: \"5b12596d-1f5f-4d81-b664-d0ddee72552c\") " pod="openshift-marketplace/certified-operators-2sqlm" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.524272 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b12596d-1f5f-4d81-b664-d0ddee72552c-catalog-content\") pod \"certified-operators-2sqlm\" (UID: \"5b12596d-1f5f-4d81-b664-d0ddee72552c\") " pod="openshift-marketplace/certified-operators-2sqlm" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.524543 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b12596d-1f5f-4d81-b664-d0ddee72552c-utilities\") pod \"certified-operators-2sqlm\" (UID: \"5b12596d-1f5f-4d81-b664-d0ddee72552c\") " pod="openshift-marketplace/certified-operators-2sqlm" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.537113 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.586211 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrsm4\" (UniqueName: \"kubernetes.io/projected/5b12596d-1f5f-4d81-b664-d0ddee72552c-kube-api-access-lrsm4\") pod \"certified-operators-2sqlm\" (UID: \"5b12596d-1f5f-4d81-b664-d0ddee72552c\") " pod="openshift-marketplace/certified-operators-2sqlm" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.607179 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n98tz"] Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.625630 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2dkc\" (UniqueName: \"kubernetes.io/projected/2c460bf5-05a1-4977-b889-1a5c3263df33-kube-api-access-p2dkc\") pod \"community-operators-6rmvm\" (UID: \"2c460bf5-05a1-4977-b889-1a5c3263df33\") " pod="openshift-marketplace/community-operators-6rmvm" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.625712 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c460bf5-05a1-4977-b889-1a5c3263df33-catalog-content\") pod \"community-operators-6rmvm\" (UID: \"2c460bf5-05a1-4977-b889-1a5c3263df33\") " pod="openshift-marketplace/community-operators-6rmvm" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.625765 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c460bf5-05a1-4977-b889-1a5c3263df33-utilities\") pod \"community-operators-6rmvm\" (UID: \"2c460bf5-05a1-4977-b889-1a5c3263df33\") " 
pod="openshift-marketplace/community-operators-6rmvm" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.627594 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c460bf5-05a1-4977-b889-1a5c3263df33-catalog-content\") pod \"community-operators-6rmvm\" (UID: \"2c460bf5-05a1-4977-b889-1a5c3263df33\") " pod="openshift-marketplace/community-operators-6rmvm" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.638082 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c460bf5-05a1-4977-b889-1a5c3263df33-utilities\") pod \"community-operators-6rmvm\" (UID: \"2c460bf5-05a1-4977-b889-1a5c3263df33\") " pod="openshift-marketplace/community-operators-6rmvm" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.665327 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v5n2s" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.686660 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2dkc\" (UniqueName: \"kubernetes.io/projected/2c460bf5-05a1-4977-b889-1a5c3263df33-kube-api-access-p2dkc\") pod \"community-operators-6rmvm\" (UID: \"2c460bf5-05a1-4977-b889-1a5c3263df33\") " pod="openshift-marketplace/community-operators-6rmvm" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.727014 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2sqlm" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.814686 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6rmvm" Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.835746 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q6dn5"] Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.170003 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:58:48 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld Jan 21 10:58:48 crc kubenswrapper[4881]: [+]process-running ok Jan 21 10:58:48 crc kubenswrapper[4881]: healthz check failed Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.170071 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.332243 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-42f9f" event={"ID":"409e44ed-8f6d-4321-9620-d8da23cf0fec","Type":"ContainerStarted","Data":"9f7a161cdf8f6dfa4d2425914e51e1e5b1421a4f039da2cadabde7c7bee8b711"} Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.334753 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bac3c741-e8bc-4059-8914-a6f834cee8dd","Type":"ContainerStarted","Data":"18ebf1075ea1988b5e7d28c03859275c513980ea48c6783e2ceaeba7f10417b0"} Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.337885 4881 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" event={"ID":"ec369bed-0b60-48b0-9de0-fcfd6ca7776d","Type":"ContainerStarted","Data":"2afb4777e26b8b9ed3649e0224c3ebc4424187c098e907f770d7f03bdea5704c"} Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.337922 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" event={"ID":"ec369bed-0b60-48b0-9de0-fcfd6ca7776d","Type":"ContainerStarted","Data":"5474c3ee513cde1d48c15d56d09e1c7f705a56319c7e90c496d397eeca80a458"} Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.338031 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.339723 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6dn5" event={"ID":"8e002e57-13ab-477a-9e16-980e13b5e47f","Type":"ContainerStarted","Data":"a5c87f9c9c2e9ea53443d498b2b01400a8b6111456d79eeb2d2d4b28aa714ca1"} Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.389249 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-42f9f" podStartSLOduration=19.389233491 podStartE2EDuration="19.389233491s" podCreationTimestamp="2026-01-21 10:58:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:48.384960676 +0000 UTC m=+115.644917155" watchObservedRunningTime="2026-01-21 10:58:48.389233491 +0000 UTC m=+115.649189960" Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.412330 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" podStartSLOduration=93.412315938 podStartE2EDuration="1m33.412315938s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:48.410998545 +0000 UTC m=+115.670955004" watchObservedRunningTime="2026-01-21 10:58:48.412315938 +0000 UTC m=+115.672272407" Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.434397 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.43437378 podStartE2EDuration="3.43437378s" podCreationTimestamp="2026-01-21 10:58:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:48.43030344 +0000 UTC m=+115.690259909" watchObservedRunningTime="2026-01-21 10:58:48.43437378 +0000 UTC m=+115.694330249" Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.518908 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2sqlm"] Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.543296 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v5n2s"] Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.586064 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6rmvm"] Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.824655 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-89m75"] Jan 21 10:58:48 crc 
Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.827147 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89m75"
Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.832032 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.853006 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-89m75"]
Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.910355 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2qtc\" (UniqueName: \"kubernetes.io/projected/075db786-6ad0-4982-b70e-bd05d4f240ec-kube-api-access-q2qtc\") pod \"redhat-marketplace-89m75\" (UID: \"075db786-6ad0-4982-b70e-bd05d4f240ec\") " pod="openshift-marketplace/redhat-marketplace-89m75"
Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.910445 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/075db786-6ad0-4982-b70e-bd05d4f240ec-utilities\") pod \"redhat-marketplace-89m75\" (UID: \"075db786-6ad0-4982-b70e-bd05d4f240ec\") " pod="openshift-marketplace/redhat-marketplace-89m75"
Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.910541 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/075db786-6ad0-4982-b70e-bd05d4f240ec-catalog-content\") pod \"redhat-marketplace-89m75\" (UID: \"075db786-6ad0-4982-b70e-bd05d4f240ec\") " pod="openshift-marketplace/redhat-marketplace-89m75"
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.011104 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/075db786-6ad0-4982-b70e-bd05d4f240ec-catalog-content\") pod \"redhat-marketplace-89m75\" (UID: \"075db786-6ad0-4982-b70e-bd05d4f240ec\") " pod="openshift-marketplace/redhat-marketplace-89m75"
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.011225 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2qtc\" (UniqueName: \"kubernetes.io/projected/075db786-6ad0-4982-b70e-bd05d4f240ec-kube-api-access-q2qtc\") pod \"redhat-marketplace-89m75\" (UID: \"075db786-6ad0-4982-b70e-bd05d4f240ec\") " pod="openshift-marketplace/redhat-marketplace-89m75"
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.011255 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/075db786-6ad0-4982-b70e-bd05d4f240ec-utilities\") pod \"redhat-marketplace-89m75\" (UID: \"075db786-6ad0-4982-b70e-bd05d4f240ec\") " pod="openshift-marketplace/redhat-marketplace-89m75"
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.011965 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/075db786-6ad0-4982-b70e-bd05d4f240ec-utilities\") pod \"redhat-marketplace-89m75\" (UID: \"075db786-6ad0-4982-b70e-bd05d4f240ec\") " pod="openshift-marketplace/redhat-marketplace-89m75"
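Every volume above walks the same three-step ladder logged by the reconciler: VerifyControllerAttachedVolume started, MountVolume started, then MountVolume.SetUp succeeded once the plugin has materialized the volume under the pod's volumes directory. A minimal sketch of that sequencing (hypothetical types; the kubelet's operationExecutor actually runs these asynchronously and re-evaluates state on each reconciler pass):

```go
// Minimal sketch of the three-step volume flow the reconciler logs
// above for each of redhat-marketplace-89m75's volumes.
// Hypothetical types; not the kubelet's operationExecutor.
package main

import "fmt"

type volume struct{ name, plugin, pod string }

func verifyAttached(v volume) error { return nil } // trivially true for empty-dir/projected
func setUp(v volume) error          { return nil } // plugin materializes the volume here

func mountVolume(v volume) error {
	fmt.Printf("VerifyControllerAttachedVolume started for volume %q pod %q\n", v.name, v.pod)
	if err := verifyAttached(v); err != nil {
		return err
	}
	fmt.Printf("MountVolume started for volume %q pod %q\n", v.name, v.pod)
	if err := setUp(v); err != nil {
		return err
	}
	fmt.Printf("MountVolume.SetUp succeeded for volume %q (plugin %s) pod %q\n", v.name, v.plugin, v.pod)
	return nil
}

func main() {
	pod := "redhat-marketplace-89m75"
	for _, v := range []volume{
		{"catalog-content", "kubernetes.io/empty-dir", pod},
		{"utilities", "kubernetes.io/empty-dir", pod},
		{"kube-api-access-q2qtc", "kubernetes.io/projected", pod},
	} {
		_ = mountVolume(v)
	}
}
```

The CSI-backed PVC at the top of this section additionally logs a MountDevice (staging) step, which was skipped there because the hostpath-provisioner driver does not advertise the STAGE_UNSTAGE_VOLUME capability.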
\"kubernetes.io/empty-dir/075db786-6ad0-4982-b70e-bd05d4f240ec-catalog-content\") pod \"redhat-marketplace-89m75\" (UID: \"075db786-6ad0-4982-b70e-bd05d4f240ec\") " pod="openshift-marketplace/redhat-marketplace-89m75" Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.035281 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2qtc\" (UniqueName: \"kubernetes.io/projected/075db786-6ad0-4982-b70e-bd05d4f240ec-kube-api-access-q2qtc\") pod \"redhat-marketplace-89m75\" (UID: \"075db786-6ad0-4982-b70e-bd05d4f240ec\") " pod="openshift-marketplace/redhat-marketplace-89m75" Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.114917 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:58:49 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld Jan 21 10:58:49 crc kubenswrapper[4881]: [+]process-running ok Jan 21 10:58:49 crc kubenswrapper[4881]: healthz check failed Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.115040 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.232343 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vljfh"] Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.234816 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vljfh" Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.259947 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vljfh"] Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.373293 4881 generic.go:334] "Generic (PLEG): container finished" podID="bac3c741-e8bc-4059-8914-a6f834cee8dd" containerID="18ebf1075ea1988b5e7d28c03859275c513980ea48c6783e2ceaeba7f10417b0" exitCode=0 Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.373373 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bac3c741-e8bc-4059-8914-a6f834cee8dd","Type":"ContainerDied","Data":"18ebf1075ea1988b5e7d28c03859275c513980ea48c6783e2ceaeba7f10417b0"} Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.384684 4881 generic.go:334] "Generic (PLEG): container finished" podID="2c460bf5-05a1-4977-b889-1a5c3263df33" containerID="21ab48233ffe1978a9c9e6217e5905832c0304da6f07fa2e19daa5ca75ac0da7" exitCode=0 Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.384818 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rmvm" event={"ID":"2c460bf5-05a1-4977-b889-1a5c3263df33","Type":"ContainerDied","Data":"21ab48233ffe1978a9c9e6217e5905832c0304da6f07fa2e19daa5ca75ac0da7"} Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.384869 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rmvm" event={"ID":"2c460bf5-05a1-4977-b889-1a5c3263df33","Type":"ContainerStarted","Data":"c3a0b0298aa8ab878f3e521eb0f166ff0e56c334391018119468d1c2b03f0be9"} Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.388216 4881 provider.go:102] 
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.388216 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.393481 4881 generic.go:334] "Generic (PLEG): container finished" podID="e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" containerID="8f66d538b15eac6e19eeb1b6e73b0917e7cb4600d289674a11496b4ddb805259" exitCode=0
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.403832 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5n2s" event={"ID":"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a","Type":"ContainerDied","Data":"8f66d538b15eac6e19eeb1b6e73b0917e7cb4600d289674a11496b4ddb805259"}
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.403904 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5n2s" event={"ID":"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a","Type":"ContainerStarted","Data":"79b5df43169324987a329525742a5078ed6a8e75640eab433d3baf2cf413407f"}
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.404667 4881 generic.go:334] "Generic (PLEG): container finished" podID="5b12596d-1f5f-4d81-b664-d0ddee72552c" containerID="5aed93291404e255299931c1a9f3a011b1cb4d3b3ce796db1f1b3e7ec12c142e" exitCode=0
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.404766 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sqlm" event={"ID":"5b12596d-1f5f-4d81-b664-d0ddee72552c","Type":"ContainerDied","Data":"5aed93291404e255299931c1a9f3a011b1cb4d3b3ce796db1f1b3e7ec12c142e"}
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.404809 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sqlm" event={"ID":"5b12596d-1f5f-4d81-b664-d0ddee72552c","Type":"ContainerStarted","Data":"06bab0b00f0f71fd0a092b84dfd550234e778896541edbd10dbb4f1a0cb5d5b8"}
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.411015 4881 generic.go:334] "Generic (PLEG): container finished" podID="8e002e57-13ab-477a-9e16-980e13b5e47f" containerID="1ccb96495e693b437b8f3969fa58a55b9e7011c267f14a44820d1cfd34daabf3" exitCode=0
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.412255 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6dn5" event={"ID":"8e002e57-13ab-477a-9e16-980e13b5e47f","Type":"ContainerDied","Data":"1ccb96495e693b437b8f3969fa58a55b9e7011c267f14a44820d1cfd34daabf3"}
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.518466 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
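The burst of "container finished ... exitCode=0" followed by ContainerDied events above is not a failure pattern: these appear to be the marketplace catalog pods' extract init containers completing normally before the registry-serving container starts (each pod mounts utilities and catalog-content volumes for exactly this handoff). Exit code zero on a died container is what separates completion from a crash; a small sketch of that reading (hypothetical types):

```go
// Small sketch of interpreting the PLEG records above: ContainerDied
// with exitCode=0 is a normal completion, not a crash.
// Hypothetical types; not the kubelet's PLEG.
package main

import "fmt"

type plegEvent struct {
	pod      string
	id       string // container ID, as in the "Data" field above
	died     bool
	exitCode int
}

func describe(e plegEvent) string {
	switch {
	case !e.died:
		return fmt.Sprintf("%s: %s started", e.pod, e.id[:12])
	case e.exitCode == 0:
		return fmt.Sprintf("%s: %s completed normally", e.pod, e.id[:12])
	default:
		return fmt.Sprintf("%s: %s crashed (exit %d)", e.pod, e.id[:12], e.exitCode)
	}
}

func main() {
	fmt.Println(describe(plegEvent{
		pod:      "openshift-marketplace/certified-operators-q6dn5",
		id:       "1ccb96495e693b437b8f3969fa58a55b9e7011c267f14a44820d1cfd34daabf3",
		died:     true,
		exitCode: 0,
	}))
}
```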
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.534866 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.538886 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.555833 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.628645 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d66b837-f7b1-4795-895f-08cdabe48b37-catalog-content\") pod \"redhat-marketplace-vljfh\" (UID: \"1d66b837-f7b1-4795-895f-08cdabe48b37\") " pod="openshift-marketplace/redhat-marketplace-vljfh" Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.628845 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d66b837-f7b1-4795-895f-08cdabe48b37-utilities\") pod \"redhat-marketplace-vljfh\" (UID: \"1d66b837-f7b1-4795-895f-08cdabe48b37\") " pod="openshift-marketplace/redhat-marketplace-vljfh" Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.628925 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b56ld\" (UniqueName: \"kubernetes.io/projected/1d66b837-f7b1-4795-895f-08cdabe48b37-kube-api-access-b56ld\") pod \"redhat-marketplace-vljfh\" (UID: \"1d66b837-f7b1-4795-895f-08cdabe48b37\") " pod="openshift-marketplace/redhat-marketplace-vljfh" Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.692302 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89m75" Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.730460 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d66b837-f7b1-4795-895f-08cdabe48b37-catalog-content\") pod \"redhat-marketplace-vljfh\" (UID: \"1d66b837-f7b1-4795-895f-08cdabe48b37\") " pod="openshift-marketplace/redhat-marketplace-vljfh" Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.730884 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82118904-aa61-43ac-968f-283dc807d0c9-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"82118904-aa61-43ac-968f-283dc807d0c9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.730906 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82118904-aa61-43ac-968f-283dc807d0c9-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"82118904-aa61-43ac-968f-283dc807d0c9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.730930 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d66b837-f7b1-4795-895f-08cdabe48b37-utilities\") pod \"redhat-marketplace-vljfh\" (UID: \"1d66b837-f7b1-4795-895f-08cdabe48b37\") " pod="openshift-marketplace/redhat-marketplace-vljfh" Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.730983 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b56ld\" (UniqueName: \"kubernetes.io/projected/1d66b837-f7b1-4795-895f-08cdabe48b37-kube-api-access-b56ld\") pod \"redhat-marketplace-vljfh\" (UID: \"1d66b837-f7b1-4795-895f-08cdabe48b37\") " pod="openshift-marketplace/redhat-marketplace-vljfh" Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.731648 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d66b837-f7b1-4795-895f-08cdabe48b37-catalog-content\") pod \"redhat-marketplace-vljfh\" (UID: \"1d66b837-f7b1-4795-895f-08cdabe48b37\") " pod="openshift-marketplace/redhat-marketplace-vljfh" Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.733722 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d66b837-f7b1-4795-895f-08cdabe48b37-utilities\") pod \"redhat-marketplace-vljfh\" (UID: \"1d66b837-f7b1-4795-895f-08cdabe48b37\") " pod="openshift-marketplace/redhat-marketplace-vljfh" Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.793587 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b56ld\" (UniqueName: \"kubernetes.io/projected/1d66b837-f7b1-4795-895f-08cdabe48b37-kube-api-access-b56ld\") pod \"redhat-marketplace-vljfh\" (UID: \"1d66b837-f7b1-4795-895f-08cdabe48b37\") " pod="openshift-marketplace/redhat-marketplace-vljfh" Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.831867 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82118904-aa61-43ac-968f-283dc807d0c9-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: 
\"82118904-aa61-43ac-968f-283dc807d0c9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.831913 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82118904-aa61-43ac-968f-283dc807d0c9-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"82118904-aa61-43ac-968f-283dc807d0c9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.832056 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82118904-aa61-43ac-968f-283dc807d0c9-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"82118904-aa61-43ac-968f-283dc807d0c9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.853180 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82118904-aa61-43ac-968f-283dc807d0c9-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"82118904-aa61-43ac-968f-283dc807d0c9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.027484 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kfmhs"] Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.028755 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kfmhs" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.041771 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.075607 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.075962 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vljfh" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.120941 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:58:50 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld Jan 21 10:58:50 crc kubenswrapper[4881]: [+]process-running ok Jan 21 10:58:50 crc kubenswrapper[4881]: healthz check failed Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.121026 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.144640 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc6f2\" (UniqueName: \"kubernetes.io/projected/d318e830-067f-4722-9d74-a45fcefc939d-kube-api-access-fc6f2\") pod \"redhat-operators-kfmhs\" (UID: \"d318e830-067f-4722-9d74-a45fcefc939d\") " pod="openshift-marketplace/redhat-operators-kfmhs" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.144742 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d318e830-067f-4722-9d74-a45fcefc939d-catalog-content\") pod \"redhat-operators-kfmhs\" (UID: \"d318e830-067f-4722-9d74-a45fcefc939d\") " pod="openshift-marketplace/redhat-operators-kfmhs" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.145005 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d318e830-067f-4722-9d74-a45fcefc939d-utilities\") pod \"redhat-operators-kfmhs\" (UID: \"d318e830-067f-4722-9d74-a45fcefc939d\") " pod="openshift-marketplace/redhat-operators-kfmhs" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.159979 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kfmhs"] Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.246376 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc6f2\" (UniqueName: \"kubernetes.io/projected/d318e830-067f-4722-9d74-a45fcefc939d-kube-api-access-fc6f2\") pod \"redhat-operators-kfmhs\" (UID: \"d318e830-067f-4722-9d74-a45fcefc939d\") " pod="openshift-marketplace/redhat-operators-kfmhs" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.246436 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d318e830-067f-4722-9d74-a45fcefc939d-catalog-content\") pod \"redhat-operators-kfmhs\" (UID: \"d318e830-067f-4722-9d74-a45fcefc939d\") " pod="openshift-marketplace/redhat-operators-kfmhs" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.246469 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d318e830-067f-4722-9d74-a45fcefc939d-utilities\") pod \"redhat-operators-kfmhs\" (UID: \"d318e830-067f-4722-9d74-a45fcefc939d\") " pod="openshift-marketplace/redhat-operators-kfmhs" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.246914 4881 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d318e830-067f-4722-9d74-a45fcefc939d-utilities\") pod \"redhat-operators-kfmhs\" (UID: \"d318e830-067f-4722-9d74-a45fcefc939d\") " pod="openshift-marketplace/redhat-operators-kfmhs" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.247378 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d318e830-067f-4722-9d74-a45fcefc939d-catalog-content\") pod \"redhat-operators-kfmhs\" (UID: \"d318e830-067f-4722-9d74-a45fcefc939d\") " pod="openshift-marketplace/redhat-operators-kfmhs" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.288032 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc6f2\" (UniqueName: \"kubernetes.io/projected/d318e830-067f-4722-9d74-a45fcefc939d-kube-api-access-fc6f2\") pod \"redhat-operators-kfmhs\" (UID: \"d318e830-067f-4722-9d74-a45fcefc939d\") " pod="openshift-marketplace/redhat-operators-kfmhs" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.361393 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kfmhs" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.429420 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t4zlb"] Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.430487 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t4zlb" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.469925 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t4zlb"] Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.558491 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn5jn\" (UniqueName: \"kubernetes.io/projected/b83e71f8-970c-4afc-ac31-264c7ca6625a-kube-api-access-sn5jn\") pod \"redhat-operators-t4zlb\" (UID: \"b83e71f8-970c-4afc-ac31-264c7ca6625a\") " pod="openshift-marketplace/redhat-operators-t4zlb" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.558558 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b83e71f8-970c-4afc-ac31-264c7ca6625a-utilities\") pod \"redhat-operators-t4zlb\" (UID: \"b83e71f8-970c-4afc-ac31-264c7ca6625a\") " pod="openshift-marketplace/redhat-operators-t4zlb" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.558611 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b83e71f8-970c-4afc-ac31-264c7ca6625a-catalog-content\") pod \"redhat-operators-t4zlb\" (UID: \"b83e71f8-970c-4afc-ac31-264c7ca6625a\") " pod="openshift-marketplace/redhat-operators-t4zlb" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.659687 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn5jn\" (UniqueName: \"kubernetes.io/projected/b83e71f8-970c-4afc-ac31-264c7ca6625a-kube-api-access-sn5jn\") pod \"redhat-operators-t4zlb\" (UID: \"b83e71f8-970c-4afc-ac31-264c7ca6625a\") " pod="openshift-marketplace/redhat-operators-t4zlb" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.660004 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b83e71f8-970c-4afc-ac31-264c7ca6625a-utilities\") pod \"redhat-operators-t4zlb\" (UID: \"b83e71f8-970c-4afc-ac31-264c7ca6625a\") " pod="openshift-marketplace/redhat-operators-t4zlb" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.660042 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b83e71f8-970c-4afc-ac31-264c7ca6625a-catalog-content\") pod \"redhat-operators-t4zlb\" (UID: \"b83e71f8-970c-4afc-ac31-264c7ca6625a\") " pod="openshift-marketplace/redhat-operators-t4zlb" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.660762 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b83e71f8-970c-4afc-ac31-264c7ca6625a-catalog-content\") pod \"redhat-operators-t4zlb\" (UID: \"b83e71f8-970c-4afc-ac31-264c7ca6625a\") " pod="openshift-marketplace/redhat-operators-t4zlb" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.661300 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b83e71f8-970c-4afc-ac31-264c7ca6625a-utilities\") pod \"redhat-operators-t4zlb\" (UID: \"b83e71f8-970c-4afc-ac31-264c7ca6625a\") " pod="openshift-marketplace/redhat-operators-t4zlb" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.665634 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-89m75"] Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.669720 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vljfh"] Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.691798 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn5jn\" (UniqueName: \"kubernetes.io/projected/b83e71f8-970c-4afc-ac31-264c7ca6625a-kube-api-access-sn5jn\") pod \"redhat-operators-t4zlb\" (UID: \"b83e71f8-970c-4afc-ac31-264c7ca6625a\") " pod="openshift-marketplace/redhat-operators-t4zlb" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.772470 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t4zlb"
Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.928925 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-znm6j"
Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.987033 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.114753 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 10:58:51 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld
Jan 21 10:58:51 crc kubenswrapper[4881]: [+]process-running ok
Jan 21 10:58:51 crc kubenswrapper[4881]: healthz check failed
Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.115358 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.175710 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.176618 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kfmhs"]
Jan 21 10:58:51 crc kubenswrapper[4881]: W0121 10:58:51.186380 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd318e830_067f_4722_9d74_a45fcefc939d.slice/crio-b87ddedd309d60e82b2425e90c86377b7db5b6d93701316fb318e5a216d01095 WatchSource:0}: Error finding container b87ddedd309d60e82b2425e90c86377b7db5b6d93701316fb318e5a216d01095: Status 404 returned error can't find the container with id b87ddedd309d60e82b2425e90c86377b7db5b6d93701316fb318e5a216d01095
Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.323298 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bac3c741-e8bc-4059-8914-a6f834cee8dd-kube-api-access\") pod \"bac3c741-e8bc-4059-8914-a6f834cee8dd\" (UID: \"bac3c741-e8bc-4059-8914-a6f834cee8dd\") "
Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.323978 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bac3c741-e8bc-4059-8914-a6f834cee8dd-kubelet-dir\") pod \"bac3c741-e8bc-4059-8914-a6f834cee8dd\" (UID: \"bac3c741-e8bc-4059-8914-a6f834cee8dd\") "
Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.324301 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bac3c741-e8bc-4059-8914-a6f834cee8dd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bac3c741-e8bc-4059-8914-a6f834cee8dd" (UID: "bac3c741-e8bc-4059-8914-a6f834cee8dd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.374649 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bac3c741-e8bc-4059-8914-a6f834cee8dd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bac3c741-e8bc-4059-8914-a6f834cee8dd" (UID: "bac3c741-e8bc-4059-8914-a6f834cee8dd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.427498 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bac3c741-e8bc-4059-8914-a6f834cee8dd-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.427607 4881 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bac3c741-e8bc-4059-8914-a6f834cee8dd-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.480034 4881 generic.go:334] "Generic (PLEG): container finished" podID="1d66b837-f7b1-4795-895f-08cdabe48b37" containerID="ec4a8cdf9092080c2fbbc3ac32eca21f15705f2f8424796b41499693e29b4095" exitCode=0
Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.521933 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vljfh" event={"ID":"1d66b837-f7b1-4795-895f-08cdabe48b37","Type":"ContainerDied","Data":"ec4a8cdf9092080c2fbbc3ac32eca21f15705f2f8424796b41499693e29b4095"}
Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.521970 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vljfh" event={"ID":"1d66b837-f7b1-4795-895f-08cdabe48b37","Type":"ContainerStarted","Data":"eb22a93b2892f0c51c953eb6eb827724775592dd8224db01464d1014b0260e0e"}
Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.521984 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfmhs" event={"ID":"d318e830-067f-4722-9d74-a45fcefc939d","Type":"ContainerStarted","Data":"b87ddedd309d60e82b2425e90c86377b7db5b6d93701316fb318e5a216d01095"}
Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.521995 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"82118904-aa61-43ac-968f-283dc807d0c9","Type":"ContainerStarted","Data":"b26c5cdd64634480b84bf6f21afe37c6fbfc185f021cc85c79dec71325038fa3"}
Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.524439 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.524480 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bac3c741-e8bc-4059-8914-a6f834cee8dd","Type":"ContainerDied","Data":"d000f23cdd5d4f1ece21017d747b89cc98e096184b532595e3b8592df18c9c55"}
Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.524531 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d000f23cdd5d4f1ece21017d747b89cc98e096184b532595e3b8592df18c9c55"
Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.627510 4881 generic.go:334] "Generic (PLEG): container finished" podID="075db786-6ad0-4982-b70e-bd05d4f240ec" containerID="aa990b30489b423fbac7484510b784c9211e2f63bd3366b894aa031bc0754115" exitCode=0
Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.627631 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89m75" event={"ID":"075db786-6ad0-4982-b70e-bd05d4f240ec","Type":"ContainerDied","Data":"aa990b30489b423fbac7484510b784c9211e2f63bd3366b894aa031bc0754115"}
Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.627721 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89m75" event={"ID":"075db786-6ad0-4982-b70e-bd05d4f240ec","Type":"ContainerStarted","Data":"97ca6fad994e892affd0e053e6d3515afda4b44ce01474758415dca871d6c00b"}
Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.707819 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t4zlb"]
Jan 21 10:58:51 crc kubenswrapper[4881]: W0121 10:58:51.741065 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb83e71f8_970c_4afc_ac31_264c7ca6625a.slice/crio-16d7bf5b9f969471865c2f6c0d0043006c1b79484bd1c97e826d3a03374ea542 WatchSource:0}: Error finding container 16d7bf5b9f969471865c2f6c0d0043006c1b79484bd1c97e826d3a03374ea542: Status 404 returned error can't find the container with id 16d7bf5b9f969471865c2f6c0d0043006c1b79484bd1c97e826d3a03374ea542
Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.870599 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.870670 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.870822 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.870899 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.877485 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-svmbc"
Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.882594 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-svmbc"
Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.933064 4881 patch_prober.go:28] interesting pod/console-f9d7485db-qxzd9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.933163 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-qxzd9" podUID="bb8fc8b3-9818-40e2-afb2-860e2d1efae1" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused"
Jan 21 10:58:52 crc kubenswrapper[4881]: I0121 10:58:52.134218 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 10:58:52 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld
Jan 21 10:58:52 crc kubenswrapper[4881]: [+]process-running ok
Jan 21 10:58:52 crc kubenswrapper[4881]: healthz check failed
Jan 21 10:58:52 crc kubenswrapper[4881]: I0121 10:58:52.134270 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 10:58:52 crc kubenswrapper[4881]: I0121 10:58:52.557479 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq"
Jan 21 10:58:52 crc kubenswrapper[4881]: I0121 10:58:52.561793 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82"
Jan 21 10:58:52 crc kubenswrapper[4881]: I0121 10:58:52.562570 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc"
Jan 21 10:58:52 crc kubenswrapper[4881]: I0121 10:58:52.666390 4881 generic.go:334] "Generic (PLEG): container finished" podID="b83e71f8-970c-4afc-ac31-264c7ca6625a" containerID="ae4974769900e5c543fbbb2d217e3f9cdfc7b9998621c36ae6d12bcf65b9b593" exitCode=0
Jan 21 10:58:52 crc kubenswrapper[4881]: I0121 10:58:52.666490 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4zlb" event={"ID":"b83e71f8-970c-4afc-ac31-264c7ca6625a","Type":"ContainerDied","Data":"ae4974769900e5c543fbbb2d217e3f9cdfc7b9998621c36ae6d12bcf65b9b593"}
Jan 21 10:58:52 crc kubenswrapper[4881]: I0121 10:58:52.666528 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4zlb" event={"ID":"b83e71f8-970c-4afc-ac31-264c7ca6625a","Type":"ContainerStarted","Data":"16d7bf5b9f969471865c2f6c0d0043006c1b79484bd1c97e826d3a03374ea542"}
Jan 21 10:58:52 crc kubenswrapper[4881]: I0121 10:58:52.683400 4881 generic.go:334] "Generic (PLEG): container finished" podID="d318e830-067f-4722-9d74-a45fcefc939d" containerID="b9a009384ba81492213bce1a87a61e1b83f262354a9aea725ad849bc0749a5f7" exitCode=0
Jan 21 10:58:52 crc kubenswrapper[4881]: I0121 10:58:52.683546 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfmhs" event={"ID":"d318e830-067f-4722-9d74-a45fcefc939d","Type":"ContainerDied","Data":"b9a009384ba81492213bce1a87a61e1b83f262354a9aea725ad849bc0749a5f7"}
Jan 21 10:58:52 crc kubenswrapper[4881]: I0121 10:58:52.699596 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"82118904-aa61-43ac-968f-283dc807d0c9","Type":"ContainerStarted","Data":"d835f915c7d824c5b21ac8719e0140dad6bbeb5334b91bb6f7250e0eba251ba9"}
Jan 21 10:58:53 crc kubenswrapper[4881]: I0121 10:58:53.250220 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-v7wnh"
Jan 21 10:58:53 crc kubenswrapper[4881]: I0121 10:58:53.538890 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-v7wnh"
Jan 21 10:58:53 crc kubenswrapper[4881]: I0121 10:58:53.557029 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=4.557002337 podStartE2EDuration="4.557002337s" podCreationTimestamp="2026-01-21 10:58:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:52.784435442 +0000 UTC m=+120.044391911" watchObservedRunningTime="2026-01-21 10:58:53.557002337 +0000 UTC m=+120.816958806"
Jan 21 10:58:54 crc kubenswrapper[4881]: I0121 10:58:54.936769 4881 generic.go:334] "Generic (PLEG): container finished" podID="82118904-aa61-43ac-968f-283dc807d0c9" containerID="d835f915c7d824c5b21ac8719e0140dad6bbeb5334b91bb6f7250e0eba251ba9" exitCode=0
Jan 21 10:58:54 crc kubenswrapper[4881]: I0121 10:58:54.936907 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"82118904-aa61-43ac-968f-283dc807d0c9","Type":"ContainerDied","Data":"d835f915c7d824c5b21ac8719e0140dad6bbeb5334b91bb6f7250e0eba251ba9"}
Jan 21 10:58:55 crc kubenswrapper[4881]: I0121 10:58:55.978829 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-phm68_b745a377-4575-45fb-a206-ea4754ecff76/cluster-samples-operator/0.log"
Jan 21 10:58:55 crc kubenswrapper[4881]: I0121 10:58:55.979107 4881 generic.go:334] "Generic (PLEG): container finished" podID="b745a377-4575-45fb-a206-ea4754ecff76" containerID="b41967d3bdb4370227d82839dc1862e1f74b1c61b2e573915f3a2a8ab7402fa8" exitCode=2
Jan 21 10:58:55 crc kubenswrapper[4881]: I0121 10:58:55.979316 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68" event={"ID":"b745a377-4575-45fb-a206-ea4754ecff76","Type":"ContainerDied","Data":"b41967d3bdb4370227d82839dc1862e1f74b1c61b2e573915f3a2a8ab7402fa8"}
Jan 21 10:58:55 crc kubenswrapper[4881]: I0121 10:58:55.980134 4881 scope.go:117] "RemoveContainer" containerID="b41967d3bdb4370227d82839dc1862e1f74b1c61b2e573915f3a2a8ab7402fa8"
Jan 21 10:58:57 crc kubenswrapper[4881]: I0121 10:58:57.005135 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-phm68_b745a377-4575-45fb-a206-ea4754ecff76/cluster-samples-operator/0.log"
Jan 21 10:58:57 crc kubenswrapper[4881]: I0121 10:58:57.005753 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68" event={"ID":"b745a377-4575-45fb-a206-ea4754ecff76","Type":"ContainerStarted","Data":"a88d091e94ff32e45195f85298f3f39e99eee297d0dc561dddf06b5b92b18ab6"}
Jan 21 10:58:57 crc kubenswrapper[4881]: I0121 10:58:57.494338 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 21 10:58:57 crc kubenswrapper[4881]: I0121 10:58:57.625291 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82118904-aa61-43ac-968f-283dc807d0c9-kube-api-access\") pod \"82118904-aa61-43ac-968f-283dc807d0c9\" (UID: \"82118904-aa61-43ac-968f-283dc807d0c9\") "
Jan 21 10:58:57 crc kubenswrapper[4881]: I0121 10:58:57.625498 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82118904-aa61-43ac-968f-283dc807d0c9-kubelet-dir\") pod \"82118904-aa61-43ac-968f-283dc807d0c9\" (UID: \"82118904-aa61-43ac-968f-283dc807d0c9\") "
Jan 21 10:58:57 crc kubenswrapper[4881]: I0121 10:58:57.625894 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82118904-aa61-43ac-968f-283dc807d0c9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "82118904-aa61-43ac-968f-283dc807d0c9" (UID: "82118904-aa61-43ac-968f-283dc807d0c9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 10:58:57 crc kubenswrapper[4881]: I0121 10:58:57.718799 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82118904-aa61-43ac-968f-283dc807d0c9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "82118904-aa61-43ac-968f-283dc807d0c9" (UID: "82118904-aa61-43ac-968f-283dc807d0c9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 10:58:57 crc kubenswrapper[4881]: I0121 10:58:57.745646 4881 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82118904-aa61-43ac-968f-283dc807d0c9-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 21 10:58:57 crc kubenswrapper[4881]: I0121 10:58:57.745712 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82118904-aa61-43ac-968f-283dc807d0c9-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 21 10:58:58 crc kubenswrapper[4881]: I0121 10:58:58.053941 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"82118904-aa61-43ac-968f-283dc807d0c9","Type":"ContainerDied","Data":"b26c5cdd64634480b84bf6f21afe37c6fbfc185f021cc85c79dec71325038fa3"}
Jan 21 10:58:58 crc kubenswrapper[4881]: I0121 10:58:58.054402 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b26c5cdd64634480b84bf6f21afe37c6fbfc185f021cc85c79dec71325038fa3"
Jan 21 10:58:58 crc kubenswrapper[4881]: I0121 10:58:58.054117 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 21 10:59:01 crc kubenswrapper[4881]: I0121 10:59:01.866867 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Jan 21 10:59:01 crc kubenswrapper[4881]: I0121 10:59:01.867222 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Jan 21 10:59:01 crc kubenswrapper[4881]: I0121 10:59:01.867278 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Jan 21 10:59:01 crc kubenswrapper[4881]: I0121 10:59:01.867351 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Jan 21 10:59:01 crc kubenswrapper[4881]: I0121 10:59:01.867400 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-wrqpb"
Jan 21 10:59:01 crc kubenswrapper[4881]: I0121 10:59:01.868455 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Jan 21 10:59:01 crc kubenswrapper[4881]: I0121 10:59:01.868480 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Jan 21 10:59:01 crc kubenswrapper[4881]: I0121 10:59:01.869097 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"8ac6e934bf2c65c273e37127eb78e3c49f6ab743027f68c7c31810cbe67f929a"} pod="openshift-console/downloads-7954f5f757-wrqpb" containerMessage="Container download-server failed liveness probe, will be restarted"
Jan 21 10:59:01 crc kubenswrapper[4881]: I0121 10:59:01.869195 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" containerID="cri-o://8ac6e934bf2c65c273e37127eb78e3c49f6ab743027f68c7c31810cbe67f929a" gracePeriod=2
Jan 21 10:59:01 crc kubenswrapper[4881]: I0121 10:59:01.993554 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-qxzd9"
Jan 21 10:59:01 crc kubenswrapper[4881]: I0121 10:59:01.998504 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-qxzd9"
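The repeated "Probe failed ... connection refused" entries above are the kubelet's HTTP probes: a GET against the container's probe URL in which any transport error or non-success status counts as a failure, and enough consecutive liveness failures produce the "failed liveness probe, will be restarted" and "Killing container with a grace period" entries at 10:59:01. A minimal sketch of such a check follows; the timeout and threshold handling are illustrative, not prober.go verbatim:

// probe_sketch.go - illustrative HTTP liveness/readiness check.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probeHTTP(url string) error {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		// e.g. "dial tcp 10.217.0.27:8080: connect: connection refused"
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probeHTTP("http://10.217.0.27:8080/"); err != nil {
		fmt.Println("Probe failed:", err)
		// After failureThreshold consecutive liveness failures the kubelet
		// kills the container and lets the restart policy bring it back.
	}
}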
Jan 21 10:59:02 crc kubenswrapper[4881]: I0121 10:59:02.250750 4881 generic.go:334] "Generic (PLEG): container finished" podID="628cb8f4-a587-498f-9398-403e0af5eec4" containerID="8ac6e934bf2c65c273e37127eb78e3c49f6ab743027f68c7c31810cbe67f929a" exitCode=0
Jan 21 10:59:02 crc kubenswrapper[4881]: I0121 10:59:02.250990 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-wrqpb" event={"ID":"628cb8f4-a587-498f-9398-403e0af5eec4","Type":"ContainerDied","Data":"8ac6e934bf2c65c273e37127eb78e3c49f6ab743027f68c7c31810cbe67f929a"}
Jan 21 10:59:06 crc kubenswrapper[4881]: I0121 10:59:06.681982 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:59:11 crc kubenswrapper[4881]: I0121 10:59:11.877261 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Jan 21 10:59:11 crc kubenswrapper[4881]: I0121 10:59:11.877678 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Jan 21 10:59:21 crc kubenswrapper[4881]: I0121 10:59:21.866673 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Jan 21 10:59:21 crc kubenswrapper[4881]: I0121 10:59:21.867803 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Jan 21 10:59:22 crc kubenswrapper[4881]: I0121 10:59:22.551268 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6"
Jan 21 10:59:23 crc kubenswrapper[4881]: I0121 10:59:23.069754 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:59:23 crc kubenswrapper[4881]: I0121 10:59:23.070334 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:59:23 crc kubenswrapper[4881]: I0121 10:59:23.072770 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 10:59:23 crc kubenswrapper[4881]: I0121 10:59:23.081353 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:59:23 crc kubenswrapper[4881]: I0121 10:59:23.089468 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:59:23 crc kubenswrapper[4881]: I0121 10:59:23.171821 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:59:23 crc kubenswrapper[4881]: I0121 10:59:23.171922 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:59:23 crc kubenswrapper[4881]: I0121 10:59:23.174599 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 10:59:23 crc kubenswrapper[4881]: I0121 10:59:23.185513 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 10:59:23 crc kubenswrapper[4881]: I0121 10:59:23.199880 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:59:23 crc kubenswrapper[4881]: I0121 10:59:23.203140 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:59:23 crc kubenswrapper[4881]: I0121 10:59:23.228226 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:59:23 crc kubenswrapper[4881]: I0121 10:59:23.430347 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:59:23 crc kubenswrapper[4881]: I0121 10:59:23.469196 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.120887 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 10:59:27 crc kubenswrapper[4881]: E0121 10:59:27.122169 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82118904-aa61-43ac-968f-283dc807d0c9" containerName="pruner" Jan 21 10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.122189 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="82118904-aa61-43ac-968f-283dc807d0c9" containerName="pruner" Jan 21 10:59:27 crc kubenswrapper[4881]: E0121 10:59:27.122207 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bac3c741-e8bc-4059-8914-a6f834cee8dd" containerName="pruner" Jan 21 10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.122214 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bac3c741-e8bc-4059-8914-a6f834cee8dd" containerName="pruner" Jan 21 10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.122482 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="82118904-aa61-43ac-968f-283dc807d0c9" containerName="pruner" Jan 21 10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.122499 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="bac3c741-e8bc-4059-8914-a6f834cee8dd" containerName="pruner" Jan 21 10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.124820 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.130840 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.131436 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.131604 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.174258 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/297f4cbb-3661-40d1-bfe7-518b3f934f71-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"297f4cbb-3661-40d1-bfe7-518b3f934f71\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.174369 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/297f4cbb-3661-40d1-bfe7-518b3f934f71-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"297f4cbb-3661-40d1-bfe7-518b3f934f71\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.275561 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/297f4cbb-3661-40d1-bfe7-518b3f934f71-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"297f4cbb-3661-40d1-bfe7-518b3f934f71\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 
10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.275639 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/297f4cbb-3661-40d1-bfe7-518b3f934f71-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"297f4cbb-3661-40d1-bfe7-518b3f934f71\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.276377 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/297f4cbb-3661-40d1-bfe7-518b3f934f71-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"297f4cbb-3661-40d1-bfe7-518b3f934f71\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.305595 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/297f4cbb-3661-40d1-bfe7-518b3f934f71-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"297f4cbb-3661-40d1-bfe7-518b3f934f71\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.455043 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:59:29 crc kubenswrapper[4881]: I0121 10:59:29.851451 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:59:29 crc kubenswrapper[4881]: I0121 10:59:29.852189 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:59:31 crc kubenswrapper[4881]: I0121 10:59:31.868392 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 21 10:59:31 crc kubenswrapper[4881]: I0121 10:59:31.868501 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 21 10:59:32 crc kubenswrapper[4881]: I0121 10:59:32.526212 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 10:59:32 crc kubenswrapper[4881]: I0121 10:59:32.527466 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:59:32 crc kubenswrapper[4881]: I0121 10:59:32.539973 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 10:59:32 crc kubenswrapper[4881]: I0121 10:59:32.649228 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41bc4c78-71b2-4ca1-b593-410715cb877b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"41bc4c78-71b2-4ca1-b593-410715cb877b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:59:32 crc kubenswrapper[4881]: I0121 10:59:32.649481 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/41bc4c78-71b2-4ca1-b593-410715cb877b-var-lock\") pod \"installer-9-crc\" (UID: \"41bc4c78-71b2-4ca1-b593-410715cb877b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:59:32 crc kubenswrapper[4881]: I0121 10:59:32.649548 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41bc4c78-71b2-4ca1-b593-410715cb877b-kube-api-access\") pod \"installer-9-crc\" (UID: \"41bc4c78-71b2-4ca1-b593-410715cb877b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:59:32 crc kubenswrapper[4881]: I0121 10:59:32.750557 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/41bc4c78-71b2-4ca1-b593-410715cb877b-var-lock\") pod \"installer-9-crc\" (UID: \"41bc4c78-71b2-4ca1-b593-410715cb877b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:59:32 crc kubenswrapper[4881]: I0121 10:59:32.750930 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41bc4c78-71b2-4ca1-b593-410715cb877b-kube-api-access\") pod \"installer-9-crc\" (UID: \"41bc4c78-71b2-4ca1-b593-410715cb877b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:59:32 crc kubenswrapper[4881]: I0121 10:59:32.751053 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41bc4c78-71b2-4ca1-b593-410715cb877b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"41bc4c78-71b2-4ca1-b593-410715cb877b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:59:32 crc kubenswrapper[4881]: I0121 10:59:32.751202 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41bc4c78-71b2-4ca1-b593-410715cb877b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"41bc4c78-71b2-4ca1-b593-410715cb877b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:59:32 crc kubenswrapper[4881]: I0121 10:59:32.751235 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/41bc4c78-71b2-4ca1-b593-410715cb877b-var-lock\") pod \"installer-9-crc\" (UID: \"41bc4c78-71b2-4ca1-b593-410715cb877b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:59:32 crc kubenswrapper[4881]: I0121 10:59:32.801261 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41bc4c78-71b2-4ca1-b593-410715cb877b-kube-api-access\") pod \"installer-9-crc\" (UID: 
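The reconciler_common and operation_generator entries above reflect the kubelet volume manager's reconcile pattern: the desired set of mounts for scheduled pods is compared against the actual set, anything no longer wanted is unmounted (as for the finished pruner pods earlier) and anything missing is mounted. A minimal sketch of that loop under assumed toy types; the real reconciler additionally handles attach/detach, device paths, and per-operation goroutines:

// reconcile_sketch.go - illustrative desired-vs-actual volume reconcile.
package main

import "fmt"

type volume struct{ pod, name string }

func reconcile(desired, actual map[volume]bool) {
	// Unmount mounts that are no longer desired
	// ("operationExecutor.UnmountVolume started" ... "Volume detached").
	for v := range actual {
		if !desired[v] {
			fmt.Printf("UnmountVolume started for volume %q pod %q\n", v.name, v.pod)
			delete(actual, v)
		}
	}
	// Mount desired volumes that are not yet mounted
	// ("operationExecutor.MountVolume started" -> "MountVolume.SetUp succeeded").
	for v := range desired {
		if !actual[v] {
			fmt.Printf("MountVolume started for volume %q pod %q\n", v.name, v.pod)
			actual[v] = true
		}
	}
}

func main() {
	desired := map[volume]bool{{"installer-9-crc", "kube-api-access"}: true}
	actual := map[volume]bool{{"revision-pruner-8-crc", "kubelet-dir"}: true}
	reconcile(desired, actual)
}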
\"41bc4c78-71b2-4ca1-b593-410715cb877b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:59:32 crc kubenswrapper[4881]: I0121 10:59:32.866160 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:59:34 crc kubenswrapper[4881]: E0121 10:59:34.437211 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 10:59:34 crc kubenswrapper[4881]: E0121 10:59:34.439056 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mf89m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-v5n2s_openshift-marketplace(e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 10:59:34 crc kubenswrapper[4881]: E0121 10:59:34.440340 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-v5n2s" podUID="e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" Jan 21 10:59:35 crc kubenswrapper[4881]: E0121 10:59:35.824165 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-v5n2s" podUID="e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" Jan 21 10:59:35 crc kubenswrapper[4881]: E0121 10:59:35.917016 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 10:59:35 crc kubenswrapper[4881]: E0121 10:59:35.917565 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b56ld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-vljfh_openshift-marketplace(1d66b837-f7b1-4795-895f-08cdabe48b37): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 10:59:35 crc kubenswrapper[4881]: E0121 10:59:35.918766 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-vljfh" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" Jan 21 10:59:35 crc kubenswrapper[4881]: E0121 10:59:35.929603 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 10:59:35 crc kubenswrapper[4881]: E0121 10:59:35.929826 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p2dkc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-6rmvm_openshift-marketplace(2c460bf5-05a1-4977-b889-1a5c3263df33): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 10:59:35 crc kubenswrapper[4881]: E0121 10:59:35.931021 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-6rmvm" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" Jan 21 10:59:39 crc kubenswrapper[4881]: E0121 10:59:39.805364 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-6rmvm" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" Jan 21 10:59:39 crc kubenswrapper[4881]: E0121 10:59:39.805845 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vljfh" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" Jan 21 10:59:39 crc kubenswrapper[4881]: E0121 10:59:39.896059 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 10:59:39 crc kubenswrapper[4881]: E0121 10:59:39.896769 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sn5jn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-t4zlb_openshift-marketplace(b83e71f8-970c-4afc-ac31-264c7ca6625a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 10:59:39 crc kubenswrapper[4881]: E0121 10:59:39.898028 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-t4zlb" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.263857 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-t4zlb" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.360924 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.361206 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
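The ErrImagePull entries above turn into ImagePullBackOff on the next sync: the pull failed once, and the kubelet will only retry after an exponentially growing delay. A minimal sketch of such a back-off schedule; the 10-second initial delay and 5-minute cap only approximate the kubelet's defaults and are illustrative:

// backoff_sketch.go - illustrative exponential image-pull back-off.
package main

import (
	"fmt"
	"time"
)

// nextBackoff returns the wait before retrying after `failures`
// consecutive failed pulls, doubling each time up to a cap.
func nextBackoff(failures int) time.Duration {
	const initial = 10 * time.Second
	const cap = 5 * time.Minute
	d := initial
	for i := 1; i < failures; i++ {
		d *= 2
		if d >= cap {
			return cap
		}
	}
	return d
}

func main() {
	for n := 1; n <= 6; n++ {
		fmt.Printf("failure %d -> Back-off pulling image for %v\n", n, nextBackoff(n))
	}
}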
Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.361206 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q2qtc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-89m75_openshift-marketplace(075db786-6ad0-4982-b70e-bd05d4f240ec): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.362633 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-89m75" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec"
Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.368414 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.369221 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lrsm4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-2sqlm_openshift-marketplace(5b12596d-1f5f-4d81-b664-d0ddee72552c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.370534 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-2sqlm" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c"
Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.374138 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.374525 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fc6f2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-kfmhs_openshift-marketplace(d318e830-067f-4722-9d74-a45fcefc939d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.376752 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-kfmhs" podUID="d318e830-067f-4722-9d74-a45fcefc939d"
Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.420297 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.420561 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g42w8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-q6dn5_openshift-marketplace(8e002e57-13ab-477a-9e16-980e13b5e47f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.423812 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-q6dn5" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f"
Jan 21 10:59:41 crc kubenswrapper[4881]: I0121 10:59:41.867146 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Jan 21 10:59:41 crc kubenswrapper[4881]: I0121 10:59:41.867667 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Jan 21 10:59:41 crc kubenswrapper[4881]: I0121 10:59:41.897475 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-wrqpb" event={"ID":"628cb8f4-a587-498f-9398-403e0af5eec4","Type":"ContainerStarted","Data":"e6fdfddd04f97ac6678436a8d986fc15a9f59365abe393ade8c3fd53ab3ad81b"}
Jan 21 10:59:41 crc kubenswrapper[4881]: I0121 10:59:41.897955 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
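The &Container{...} dumps in the "Unhandled Error" entries above are Go struct literals of the marketplace catalog pods' extract-content init container, printed by kuberuntime_manager.go. For readability, here is the same spec reconstructed as a sketch with k8s.io/api/core/v1 types; the field values are copied from the log, while the image and the kube-api-access-* mount name vary per pod:

// container_sketch.go - readable reconstruction of the logged init container.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func extractContentInitContainer(image, saMountName string) corev1.Container {
	runAsUser := int64(1000170000)
	runAsNonRoot := true
	allowPrivilegeEscalation := false
	return corev1.Container{
		Name:    "extract-content",
		Image:   image, // e.g. registry.redhat.io/redhat/certified-operator-index:v4.18
		Command: []string{"/utilities/copy-content"},
		Args: []string{
			"--catalog.from=/configs",
			"--catalog.to=/extracted-catalog/catalog",
			"--cache.from=/tmp/cache",
			"--cache.to=/extracted-catalog/cache",
		},
		VolumeMounts: []corev1.VolumeMount{
			{Name: "utilities", MountPath: "/utilities"},
			{Name: "catalog-content", MountPath: "/extracted-catalog"},
			{Name: saMountName, ReadOnly: true, MountPath: "/var/run/secrets/kubernetes.io/serviceaccount"},
		},
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
		ImagePullPolicy:          corev1.PullAlways,
		SecurityContext: &corev1.SecurityContext{
			Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}},
			RunAsUser:                &runAsUser,
			RunAsNonRoot:             &runAsNonRoot,
			AllowPrivilegeEscalation: &allowPrivilegeEscalation,
		},
	}
}

func main() {
	c := extractContentInitContainer("registry.redhat.io/redhat/certified-operator-index:v4.18", "kube-api-access-g42w8")
	fmt.Println(c.Name, c.Image)
}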
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 21 10:59:41 crc kubenswrapper[4881]: I0121 10:59:41.898256 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-wrqpb" Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.898701 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-q6dn5" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.898887 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2sqlm" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.900614 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-kfmhs" podUID="d318e830-067f-4722-9d74-a45fcefc939d" Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.902306 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-89m75" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" Jan 21 10:59:42 crc kubenswrapper[4881]: I0121 10:59:42.082064 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 10:59:42 crc kubenswrapper[4881]: I0121 10:59:42.086054 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 10:59:42 crc kubenswrapper[4881]: W0121 10:59:42.117690 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod297f4cbb_3661_40d1_bfe7_518b3f934f71.slice/crio-63025c330fe1b460c8485833df18772b34861db69d20da6c48f086fa46d98f67 WatchSource:0}: Error finding container 63025c330fe1b460c8485833df18772b34861db69d20da6c48f086fa46d98f67: Status 404 returned error can't find the container with id 63025c330fe1b460c8485833df18772b34861db69d20da6c48f086fa46d98f67 Jan 21 10:59:42 crc kubenswrapper[4881]: W0121 10:59:42.245893 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-0b0bdf164de368a9f532c9b0db44e3257336e81a051a6283ac30c242895ceccc WatchSource:0}: Error finding container 0b0bdf164de368a9f532c9b0db44e3257336e81a051a6283ac30c242895ceccc: Status 404 returned error can't find the container with id 0b0bdf164de368a9f532c9b0db44e3257336e81a051a6283ac30c242895ceccc Jan 21 10:59:42 crc kubenswrapper[4881]: I0121 10:59:42.904525 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"297f4cbb-3661-40d1-bfe7-518b3f934f71","Type":"ContainerStarted","Data":"63025c330fe1b460c8485833df18772b34861db69d20da6c48f086fa46d98f67"} Jan 21 10:59:42 crc kubenswrapper[4881]: I0121 10:59:42.912105 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"1b7183540119c8b9eee168945b8926646499506cc41c32a7e3cafc30f0b2a739"} Jan 21 10:59:42 crc kubenswrapper[4881]: I0121 10:59:42.912368 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"599b13afa3de1c32ea39de784508c7665fb436ae053e169463ce8f7cfbb59252"} Jan 21 10:59:42 crc kubenswrapper[4881]: I0121 10:59:42.916039 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"1813207fad30a5540c33f13fba6fda53d19e46ec4d3fa140eb5d8aadc76e5e13"} Jan 21 10:59:42 crc kubenswrapper[4881]: I0121 10:59:42.916150 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d5cae0b10345945d3ec1ab0c087a08e8a2a69d10408227202319ed641a01f0d5"} Jan 21 10:59:42 crc kubenswrapper[4881]: I0121 10:59:42.916620 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:59:42 crc kubenswrapper[4881]: I0121 10:59:42.922231 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"41bc4c78-71b2-4ca1-b593-410715cb877b","Type":"ContainerStarted","Data":"442a9d2a13a72bc50a93b9b5088365fc2ff7f17c8a181731060f8bf93fd639fd"} Jan 21 10:59:42 crc kubenswrapper[4881]: I0121 10:59:42.924879 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1d58c5397e8729c1268d44dec4fc932a9d2409e8f205f79d1712c41ff66ce64d"} Jan 21 10:59:42 crc kubenswrapper[4881]: I0121 10:59:42.924959 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"0b0bdf164de368a9f532c9b0db44e3257336e81a051a6283ac30c242895ceccc"} Jan 21 10:59:42 crc kubenswrapper[4881]: I0121 10:59:42.925921 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 21 10:59:42 crc kubenswrapper[4881]: I0121 10:59:42.926079 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 21 10:59:47 crc kubenswrapper[4881]: I0121 10:59:47.965274 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"41bc4c78-71b2-4ca1-b593-410715cb877b","Type":"ContainerStarted","Data":"891a9148acd513d44e13545e811cd63c09e7d52344359f98044e5a82a847b9a1"} Jan 21 10:59:49 crc kubenswrapper[4881]: I0121 10:59:48.972972 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"297f4cbb-3661-40d1-bfe7-518b3f934f71","Type":"ContainerStarted","Data":"7cf64852b8e94a0c7baefe70b649fd9a1474d6d2a1a6df059f6227f5286ea94e"} Jan 21 10:59:49 crc kubenswrapper[4881]: I0121 10:59:48.992617 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=21.992593214 podStartE2EDuration="21.992593214s" podCreationTimestamp="2026-01-21 10:59:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:59:48.989964049 +0000 UTC m=+176.249920518" watchObservedRunningTime="2026-01-21 10:59:48.992593214 +0000 UTC m=+176.252549683" Jan 21 10:59:49 crc kubenswrapper[4881]: I0121 10:59:49.012408 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=17.01198806 podStartE2EDuration="17.01198806s" podCreationTimestamp="2026-01-21 10:59:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:59:49.011582241 +0000 UTC m=+176.271538720" watchObservedRunningTime="2026-01-21 10:59:49.01198806 +0000 UTC m=+176.271944529" Jan 21 10:59:50 crc kubenswrapper[4881]: I0121 10:59:50.991041 4881 generic.go:334] "Generic (PLEG): container finished" podID="297f4cbb-3661-40d1-bfe7-518b3f934f71" containerID="7cf64852b8e94a0c7baefe70b649fd9a1474d6d2a1a6df059f6227f5286ea94e" exitCode=0 Jan 21 10:59:50 crc kubenswrapper[4881]: I0121 10:59:50.991135 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"297f4cbb-3661-40d1-bfe7-518b3f934f71","Type":"ContainerDied","Data":"7cf64852b8e94a0c7baefe70b649fd9a1474d6d2a1a6df059f6227f5286ea94e"} Jan 21 10:59:51 crc kubenswrapper[4881]: I0121 10:59:51.868581 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 21 10:59:51 crc kubenswrapper[4881]: I0121 10:59:51.869145 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 21 10:59:51 crc kubenswrapper[4881]: I0121 10:59:51.868694 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 21 10:59:51 crc kubenswrapper[4881]: I0121 10:59:51.869331 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 21 10:59:54 crc kubenswrapper[4881]: I0121 10:59:54.314143 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:59:54 crc kubenswrapper[4881]: I0121 10:59:54.447978 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/297f4cbb-3661-40d1-bfe7-518b3f934f71-kubelet-dir\") pod \"297f4cbb-3661-40d1-bfe7-518b3f934f71\" (UID: \"297f4cbb-3661-40d1-bfe7-518b3f934f71\") " Jan 21 10:59:54 crc kubenswrapper[4881]: I0121 10:59:54.448097 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/297f4cbb-3661-40d1-bfe7-518b3f934f71-kube-api-access\") pod \"297f4cbb-3661-40d1-bfe7-518b3f934f71\" (UID: \"297f4cbb-3661-40d1-bfe7-518b3f934f71\") " Jan 21 10:59:54 crc kubenswrapper[4881]: I0121 10:59:54.448124 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/297f4cbb-3661-40d1-bfe7-518b3f934f71-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "297f4cbb-3661-40d1-bfe7-518b3f934f71" (UID: "297f4cbb-3661-40d1-bfe7-518b3f934f71"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:59:54 crc kubenswrapper[4881]: I0121 10:59:54.448758 4881 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/297f4cbb-3661-40d1-bfe7-518b3f934f71-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:54 crc kubenswrapper[4881]: I0121 10:59:54.457004 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/297f4cbb-3661-40d1-bfe7-518b3f934f71-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "297f4cbb-3661-40d1-bfe7-518b3f934f71" (UID: "297f4cbb-3661-40d1-bfe7-518b3f934f71"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:59:54 crc kubenswrapper[4881]: I0121 10:59:54.650424 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/297f4cbb-3661-40d1-bfe7-518b3f934f71-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:55 crc kubenswrapper[4881]: I0121 10:59:55.021962 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"297f4cbb-3661-40d1-bfe7-518b3f934f71","Type":"ContainerDied","Data":"63025c330fe1b460c8485833df18772b34861db69d20da6c48f086fa46d98f67"} Jan 21 10:59:55 crc kubenswrapper[4881]: I0121 10:59:55.022472 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63025c330fe1b460c8485833df18772b34861db69d20da6c48f086fa46d98f67" Jan 21 10:59:55 crc kubenswrapper[4881]: I0121 10:59:55.022077 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:59:56 crc kubenswrapper[4881]: I0121 10:59:56.029581 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5n2s" event={"ID":"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a","Type":"ContainerStarted","Data":"af52521bc076413d8e72a4c4cff88c04fc3be6a74567d99416c9a8f9f7a66758"} Jan 21 10:59:59 crc kubenswrapper[4881]: I0121 10:59:59.124414 4881 generic.go:334] "Generic (PLEG): container finished" podID="e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" containerID="af52521bc076413d8e72a4c4cff88c04fc3be6a74567d99416c9a8f9f7a66758" exitCode=0 Jan 21 10:59:59 crc kubenswrapper[4881]: I0121 10:59:59.124735 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5n2s" event={"ID":"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a","Type":"ContainerDied","Data":"af52521bc076413d8e72a4c4cff88c04fc3be6a74567d99416c9a8f9f7a66758"} Jan 21 10:59:59 crc kubenswrapper[4881]: I0121 10:59:59.850870 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:59:59 crc kubenswrapper[4881]: I0121 10:59:59.851208 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.150598 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb"] Jan 21 11:00:00 crc kubenswrapper[4881]: E0121 11:00:00.150870 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="297f4cbb-3661-40d1-bfe7-518b3f934f71" containerName="pruner" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.150883 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="297f4cbb-3661-40d1-bfe7-518b3f934f71" containerName="pruner" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.150995 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="297f4cbb-3661-40d1-bfe7-518b3f934f71" containerName="pruner" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.151385 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.155699 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.156102 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.279907 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65c09a3a-6389-443c-888b-fe83557dd508-secret-volume\") pod \"collect-profiles-29483220-2jmrb\" (UID: \"65c09a3a-6389-443c-888b-fe83557dd508\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.280110 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65c09a3a-6389-443c-888b-fe83557dd508-config-volume\") pod \"collect-profiles-29483220-2jmrb\" (UID: \"65c09a3a-6389-443c-888b-fe83557dd508\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.280177 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2rdm\" (UniqueName: \"kubernetes.io/projected/65c09a3a-6389-443c-888b-fe83557dd508-kube-api-access-b2rdm\") pod \"collect-profiles-29483220-2jmrb\" (UID: \"65c09a3a-6389-443c-888b-fe83557dd508\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.308853 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb"] Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.381777 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2rdm\" (UniqueName: \"kubernetes.io/projected/65c09a3a-6389-443c-888b-fe83557dd508-kube-api-access-b2rdm\") pod \"collect-profiles-29483220-2jmrb\" (UID: \"65c09a3a-6389-443c-888b-fe83557dd508\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.381919 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65c09a3a-6389-443c-888b-fe83557dd508-secret-volume\") pod \"collect-profiles-29483220-2jmrb\" (UID: \"65c09a3a-6389-443c-888b-fe83557dd508\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.381994 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65c09a3a-6389-443c-888b-fe83557dd508-config-volume\") pod \"collect-profiles-29483220-2jmrb\" (UID: \"65c09a3a-6389-443c-888b-fe83557dd508\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.383802 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65c09a3a-6389-443c-888b-fe83557dd508-config-volume\") pod 
\"collect-profiles-29483220-2jmrb\" (UID: \"65c09a3a-6389-443c-888b-fe83557dd508\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.394982 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65c09a3a-6389-443c-888b-fe83557dd508-secret-volume\") pod \"collect-profiles-29483220-2jmrb\" (UID: \"65c09a3a-6389-443c-888b-fe83557dd508\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.405444 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2rdm\" (UniqueName: \"kubernetes.io/projected/65c09a3a-6389-443c-888b-fe83557dd508-kube-api-access-b2rdm\") pod \"collect-profiles-29483220-2jmrb\" (UID: \"65c09a3a-6389-443c-888b-fe83557dd508\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.470013 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" Jan 21 11:00:01 crc kubenswrapper[4881]: I0121 11:00:01.356039 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6dn5" event={"ID":"8e002e57-13ab-477a-9e16-980e13b5e47f","Type":"ContainerStarted","Data":"cad9f8570b6b7c8359172ebecd350bcad67cfe5e05e5aeca3f0a038ec3357bb5"} Jan 21 11:00:01 crc kubenswrapper[4881]: I0121 11:00:01.359929 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89m75" event={"ID":"075db786-6ad0-4982-b70e-bd05d4f240ec","Type":"ContainerStarted","Data":"a06c8d6c70785e0e51b0e238072a99f6a50caf04a590fb7ba69cc08788ffee9a"} Jan 21 11:00:01 crc kubenswrapper[4881]: I0121 11:00:01.545354 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vljfh" event={"ID":"1d66b837-f7b1-4795-895f-08cdabe48b37","Type":"ContainerStarted","Data":"87b3da4f38a8247ed7dbb2b11f2ec14c16c71eee1d17657bf85f241bc0e931f6"} Jan 21 11:00:01 crc kubenswrapper[4881]: I0121 11:00:01.644273 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4zlb" event={"ID":"b83e71f8-970c-4afc-ac31-264c7ca6625a","Type":"ContainerStarted","Data":"d97aa85fa9dba9a5f261efedffb0ffe8efb44a7c0ff638756658eab20e0bacac"} Jan 21 11:00:01 crc kubenswrapper[4881]: I0121 11:00:01.652993 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rmvm" event={"ID":"2c460bf5-05a1-4977-b889-1a5c3263df33","Type":"ContainerStarted","Data":"db0493653bc30919d4352c24df01a207c2de62ad8f1fa10ff346fcc988a5549e"} Jan 21 11:00:01 crc kubenswrapper[4881]: I0121 11:00:01.655071 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfmhs" event={"ID":"d318e830-067f-4722-9d74-a45fcefc939d","Type":"ContainerStarted","Data":"456438ece135082aa65a1f9d3e1df54da4ad18d3ac41d1e2ac75d98b61443cef"} Jan 21 11:00:01 crc kubenswrapper[4881]: I0121 11:00:01.902823 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 21 11:00:01 crc kubenswrapper[4881]: I0121 11:00:01.902896 4881 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 21 11:00:01 crc kubenswrapper[4881]: I0121 11:00:01.903257 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 21 11:00:01 crc kubenswrapper[4881]: I0121 11:00:01.903288 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 21 11:00:02 crc kubenswrapper[4881]: I0121 11:00:02.754549 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5n2s" event={"ID":"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a","Type":"ContainerStarted","Data":"091b8c7421a6daba2d38abc6600200f92a99a9d9fffb2a18673337cc1cab5a28"} Jan 21 11:00:02 crc kubenswrapper[4881]: I0121 11:00:02.868975 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v5n2s" podStartSLOduration=4.649819084 podStartE2EDuration="1m15.868935322s" podCreationTimestamp="2026-01-21 10:58:47 +0000 UTC" firstStartedPulling="2026-01-21 10:58:49.395434554 +0000 UTC m=+116.655391023" lastFinishedPulling="2026-01-21 11:00:00.614550792 +0000 UTC m=+187.874507261" observedRunningTime="2026-01-21 11:00:02.864735729 +0000 UTC m=+190.124692218" watchObservedRunningTime="2026-01-21 11:00:02.868935322 +0000 UTC m=+190.128891781" Jan 21 11:00:02 crc kubenswrapper[4881]: I0121 11:00:02.891584 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb"] Jan 21 11:00:04 crc kubenswrapper[4881]: I0121 11:00:04.312772 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" event={"ID":"65c09a3a-6389-443c-888b-fe83557dd508","Type":"ContainerStarted","Data":"e7078195838c011ba41af3c83e6d88fadf75d4028c7c8f34237503be20319141"} Jan 21 11:00:05 crc kubenswrapper[4881]: I0121 11:00:05.489849 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" event={"ID":"65c09a3a-6389-443c-888b-fe83557dd508","Type":"ContainerStarted","Data":"506baee9263f2e28d3f1ef1ef645da28ead83f7c212d5255ebc44d13c43d15f7"} Jan 21 11:00:05 crc kubenswrapper[4881]: I0121 11:00:05.513428 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" podStartSLOduration=5.513381962 podStartE2EDuration="5.513381962s" podCreationTimestamp="2026-01-21 11:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:00:05.512343466 +0000 UTC m=+192.772299935" watchObservedRunningTime="2026-01-21 11:00:05.513381962 +0000 UTC m=+192.773338431" Jan 21 11:00:06 crc kubenswrapper[4881]: I0121 11:00:06.588179 4881 generic.go:334] "Generic (PLEG): container 
finished" podID="075db786-6ad0-4982-b70e-bd05d4f240ec" containerID="a06c8d6c70785e0e51b0e238072a99f6a50caf04a590fb7ba69cc08788ffee9a" exitCode=0 Jan 21 11:00:06 crc kubenswrapper[4881]: I0121 11:00:06.588259 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89m75" event={"ID":"075db786-6ad0-4982-b70e-bd05d4f240ec","Type":"ContainerDied","Data":"a06c8d6c70785e0e51b0e238072a99f6a50caf04a590fb7ba69cc08788ffee9a"} Jan 21 11:00:06 crc kubenswrapper[4881]: I0121 11:00:06.594996 4881 generic.go:334] "Generic (PLEG): container finished" podID="1d66b837-f7b1-4795-895f-08cdabe48b37" containerID="87b3da4f38a8247ed7dbb2b11f2ec14c16c71eee1d17657bf85f241bc0e931f6" exitCode=0 Jan 21 11:00:06 crc kubenswrapper[4881]: I0121 11:00:06.595910 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vljfh" event={"ID":"1d66b837-f7b1-4795-895f-08cdabe48b37","Type":"ContainerDied","Data":"87b3da4f38a8247ed7dbb2b11f2ec14c16c71eee1d17657bf85f241bc0e931f6"} Jan 21 11:00:07 crc kubenswrapper[4881]: I0121 11:00:07.665933 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v5n2s" Jan 21 11:00:07 crc kubenswrapper[4881]: I0121 11:00:07.666089 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v5n2s" Jan 21 11:00:08 crc kubenswrapper[4881]: I0121 11:00:08.782977 4881 generic.go:334] "Generic (PLEG): container finished" podID="8e002e57-13ab-477a-9e16-980e13b5e47f" containerID="cad9f8570b6b7c8359172ebecd350bcad67cfe5e05e5aeca3f0a038ec3357bb5" exitCode=0 Jan 21 11:00:08 crc kubenswrapper[4881]: I0121 11:00:08.783056 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6dn5" event={"ID":"8e002e57-13ab-477a-9e16-980e13b5e47f","Type":"ContainerDied","Data":"cad9f8570b6b7c8359172ebecd350bcad67cfe5e05e5aeca3f0a038ec3357bb5"} Jan 21 11:00:08 crc kubenswrapper[4881]: I0121 11:00:08.792165 4881 generic.go:334] "Generic (PLEG): container finished" podID="65c09a3a-6389-443c-888b-fe83557dd508" containerID="506baee9263f2e28d3f1ef1ef645da28ead83f7c212d5255ebc44d13c43d15f7" exitCode=0 Jan 21 11:00:08 crc kubenswrapper[4881]: I0121 11:00:08.792918 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" event={"ID":"65c09a3a-6389-443c-888b-fe83557dd508","Type":"ContainerDied","Data":"506baee9263f2e28d3f1ef1ef645da28ead83f7c212d5255ebc44d13c43d15f7"} Jan 21 11:00:09 crc kubenswrapper[4881]: I0121 11:00:09.780989 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-v5n2s" podUID="e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" containerName="registry-server" probeResult="failure" output=< Jan 21 11:00:09 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 11:00:09 crc kubenswrapper[4881]: > Jan 21 11:00:09 crc kubenswrapper[4881]: I0121 11:00:09.798463 4881 generic.go:334] "Generic (PLEG): container finished" podID="2c460bf5-05a1-4977-b889-1a5c3263df33" containerID="db0493653bc30919d4352c24df01a207c2de62ad8f1fa10ff346fcc988a5549e" exitCode=0 Jan 21 11:00:09 crc kubenswrapper[4881]: I0121 11:00:09.798559 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rmvm" 
event={"ID":"2c460bf5-05a1-4977-b889-1a5c3263df33","Type":"ContainerDied","Data":"db0493653bc30919d4352c24df01a207c2de62ad8f1fa10ff346fcc988a5549e"} Jan 21 11:00:12 crc kubenswrapper[4881]: I0121 11:00:12.462905 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-wrqpb" Jan 21 11:00:13 crc kubenswrapper[4881]: I0121 11:00:13.096908 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" Jan 21 11:00:13 crc kubenswrapper[4881]: I0121 11:00:13.204284 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65c09a3a-6389-443c-888b-fe83557dd508-secret-volume\") pod \"65c09a3a-6389-443c-888b-fe83557dd508\" (UID: \"65c09a3a-6389-443c-888b-fe83557dd508\") " Jan 21 11:00:13 crc kubenswrapper[4881]: I0121 11:00:13.204562 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2rdm\" (UniqueName: \"kubernetes.io/projected/65c09a3a-6389-443c-888b-fe83557dd508-kube-api-access-b2rdm\") pod \"65c09a3a-6389-443c-888b-fe83557dd508\" (UID: \"65c09a3a-6389-443c-888b-fe83557dd508\") " Jan 21 11:00:13 crc kubenswrapper[4881]: I0121 11:00:13.204650 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65c09a3a-6389-443c-888b-fe83557dd508-config-volume\") pod \"65c09a3a-6389-443c-888b-fe83557dd508\" (UID: \"65c09a3a-6389-443c-888b-fe83557dd508\") " Jan 21 11:00:13 crc kubenswrapper[4881]: I0121 11:00:13.234773 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65c09a3a-6389-443c-888b-fe83557dd508-config-volume" (OuterVolumeSpecName: "config-volume") pod "65c09a3a-6389-443c-888b-fe83557dd508" (UID: "65c09a3a-6389-443c-888b-fe83557dd508"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:00:13 crc kubenswrapper[4881]: I0121 11:00:13.307048 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65c09a3a-6389-443c-888b-fe83557dd508-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:13 crc kubenswrapper[4881]: I0121 11:00:13.337449 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65c09a3a-6389-443c-888b-fe83557dd508-kube-api-access-b2rdm" (OuterVolumeSpecName: "kube-api-access-b2rdm") pod "65c09a3a-6389-443c-888b-fe83557dd508" (UID: "65c09a3a-6389-443c-888b-fe83557dd508"). InnerVolumeSpecName "kube-api-access-b2rdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:00:13 crc kubenswrapper[4881]: I0121 11:00:13.338052 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65c09a3a-6389-443c-888b-fe83557dd508-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "65c09a3a-6389-443c-888b-fe83557dd508" (UID: "65c09a3a-6389-443c-888b-fe83557dd508"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:13 crc kubenswrapper[4881]: I0121 11:00:13.408913 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2rdm\" (UniqueName: \"kubernetes.io/projected/65c09a3a-6389-443c-888b-fe83557dd508-kube-api-access-b2rdm\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:13 crc kubenswrapper[4881]: I0121 11:00:13.408945 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65c09a3a-6389-443c-888b-fe83557dd508-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:13 crc kubenswrapper[4881]: I0121 11:00:13.595333 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 11:00:13 crc kubenswrapper[4881]: I0121 11:00:13.952475 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" event={"ID":"65c09a3a-6389-443c-888b-fe83557dd508","Type":"ContainerDied","Data":"e7078195838c011ba41af3c83e6d88fadf75d4028c7c8f34237503be20319141"} Jan 21 11:00:13 crc kubenswrapper[4881]: I0121 11:00:13.952526 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7078195838c011ba41af3c83e6d88fadf75d4028c7c8f34237503be20319141" Jan 21 11:00:13 crc kubenswrapper[4881]: I0121 11:00:13.952590 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" Jan 21 11:00:14 crc kubenswrapper[4881]: I0121 11:00:14.960299 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sqlm" event={"ID":"5b12596d-1f5f-4d81-b664-d0ddee72552c","Type":"ContainerStarted","Data":"8c58e8e6d9f4309fce56e3b043abdb46d3d4af579c4a6d9ae43870620be9634e"} Jan 21 11:00:16 crc kubenswrapper[4881]: E0121 11:00:16.512909 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb83e71f8_970c_4afc_ac31_264c7ca6625a.slice/crio-d97aa85fa9dba9a5f261efedffb0ffe8efb44a7c0ff638756658eab20e0bacac.scope\": RecentStats: unable to find data in memory cache]" Jan 21 11:00:17 crc kubenswrapper[4881]: I0121 11:00:17.259697 4881 generic.go:334] "Generic (PLEG): container finished" podID="b83e71f8-970c-4afc-ac31-264c7ca6625a" containerID="d97aa85fa9dba9a5f261efedffb0ffe8efb44a7c0ff638756658eab20e0bacac" exitCode=0 Jan 21 11:00:17 crc kubenswrapper[4881]: I0121 11:00:17.260028 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4zlb" event={"ID":"b83e71f8-970c-4afc-ac31-264c7ca6625a","Type":"ContainerDied","Data":"d97aa85fa9dba9a5f261efedffb0ffe8efb44a7c0ff638756658eab20e0bacac"} Jan 21 11:00:17 crc kubenswrapper[4881]: I0121 11:00:17.263110 4881 generic.go:334] "Generic (PLEG): container finished" podID="d318e830-067f-4722-9d74-a45fcefc939d" containerID="456438ece135082aa65a1f9d3e1df54da4ad18d3ac41d1e2ac75d98b61443cef" exitCode=0 Jan 21 11:00:17 crc kubenswrapper[4881]: I0121 11:00:17.263172 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfmhs" event={"ID":"d318e830-067f-4722-9d74-a45fcefc939d","Type":"ContainerDied","Data":"456438ece135082aa65a1f9d3e1df54da4ad18d3ac41d1e2ac75d98b61443cef"} Jan 21 11:00:17 crc kubenswrapper[4881]: I0121 11:00:17.845714 4881 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v5n2s" Jan 21 11:00:17 crc kubenswrapper[4881]: I0121 11:00:17.923061 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v5n2s" Jan 21 11:00:18 crc kubenswrapper[4881]: I0121 11:00:18.272674 4881 generic.go:334] "Generic (PLEG): container finished" podID="5b12596d-1f5f-4d81-b664-d0ddee72552c" containerID="8c58e8e6d9f4309fce56e3b043abdb46d3d4af579c4a6d9ae43870620be9634e" exitCode=0 Jan 21 11:00:18 crc kubenswrapper[4881]: I0121 11:00:18.272745 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sqlm" event={"ID":"5b12596d-1f5f-4d81-b664-d0ddee72552c","Type":"ContainerDied","Data":"8c58e8e6d9f4309fce56e3b043abdb46d3d4af579c4a6d9ae43870620be9634e"} Jan 21 11:00:19 crc kubenswrapper[4881]: I0121 11:00:19.392496 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-whh46"] Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.325629 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rmvm" event={"ID":"2c460bf5-05a1-4977-b889-1a5c3263df33","Type":"ContainerStarted","Data":"7e5f304bc82a020e253bc1850121534b947e1ce59d3cde3e998cffd1481389a2"} Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.790386 4881 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 11:00:25 crc kubenswrapper[4881]: E0121 11:00:25.790822 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c09a3a-6389-443c-888b-fe83557dd508" containerName="collect-profiles" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.790844 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c09a3a-6389-443c-888b-fe83557dd508" containerName="collect-profiles" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.791106 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="65c09a3a-6389-443c-888b-fe83557dd508" containerName="collect-profiles" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.791659 4881 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.791863 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.792086 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534" gracePeriod=15 Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.792289 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766" gracePeriod=15 Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.792408 4881 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.792396 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2" gracePeriod=15 Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.792541 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f" gracePeriod=15 Jan 21 11:00:25 crc kubenswrapper[4881]: E0121 11:00:25.792641 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.792864 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 11:00:25 crc kubenswrapper[4881]: E0121 11:00:25.792902 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.792914 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.792642 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d" gracePeriod=15 Jan 21 11:00:25 crc kubenswrapper[4881]: E0121 11:00:25.792934 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.793305 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 11:00:25 crc kubenswrapper[4881]: E0121 11:00:25.793350 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.793373 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 11:00:25 crc kubenswrapper[4881]: E0121 11:00:25.793397 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.793404 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 11:00:25 crc kubenswrapper[4881]: E0121 11:00:25.793416 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.793423 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 11:00:25 crc kubenswrapper[4881]: E0121 11:00:25.793508 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.793516 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.793849 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.793863 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.793874 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.793882 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.793890 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.793901 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.793910 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 11:00:25 crc kubenswrapper[4881]: E0121 11:00:25.794069 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.794080 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.799368 4881 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" 
pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.926372 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.926437 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.926476 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.926503 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.926521 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.926554 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.926593 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.926630 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.028694 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.028762 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.028831 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.028874 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.028900 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.028931 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.028958 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.028974 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.028974 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.029100 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.029129 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.029148 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.029164 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.029181 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.029195 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.029210 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: E0121 11:00:26.449710 4881 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.4:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-89m75.188cb9f7888c87eb openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-89m75,UID:075db786-6ad0-4982-b70e-bd05d4f240ec,APIVersion:v1,ResourceVersion:28589,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 11:00:26.448734187 +0000 UTC m=+213.708690656,LastTimestamp:2026-01-21 11:00:26.448734187 +0000 UTC m=+213.708690656,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" 
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.344109 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6dn5" event={"ID":"8e002e57-13ab-477a-9e16-980e13b5e47f","Type":"ContainerStarted","Data":"e42581773a8d4ea1772dd60eaf9071bf2de0cdd39b8e134e5ac5a682d95b642f"} Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.347186 4881 generic.go:334] "Generic (PLEG): container finished" podID="41bc4c78-71b2-4ca1-b593-410715cb877b" containerID="891a9148acd513d44e13545e811cd63c09e7d52344359f98044e5a82a847b9a1" exitCode=0 Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.347303 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"41bc4c78-71b2-4ca1-b593-410715cb877b","Type":"ContainerDied","Data":"891a9148acd513d44e13545e811cd63c09e7d52344359f98044e5a82a847b9a1"} Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.347754 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.348736 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.348975 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.353274 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89m75" event={"ID":"075db786-6ad0-4982-b70e-bd05d4f240ec","Type":"ContainerStarted","Data":"d4c87b729f18eaf9f12531e5147374286d6a7a44e910d96df5b3275a242bc490"} Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.354166 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.355461 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.355661 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection 
refused" Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.371110 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vljfh" event={"ID":"1d66b837-f7b1-4795-895f-08cdabe48b37","Type":"ContainerStarted","Data":"0e3e6281eef028f6cd4f512b5ed4a48f81805bf0232c271e4efbf06a7853a75b"} Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.372372 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.372760 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.373119 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.373432 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.373875 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.376080 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.377072 4881 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766" exitCode=0 Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.377121 4881 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f" exitCode=0 Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.377132 4881 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2" exitCode=0 Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.377142 4881 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d" exitCode=2 Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.377185 4881 scope.go:117] "RemoveContainer" 
containerID="676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570" Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.378227 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.378652 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.379086 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.379382 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.379597 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.816725 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6rmvm" Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.817193 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6rmvm" Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.864223 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6rmvm" Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.865383 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.866167 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.867112 4881 status_manager.go:851] "Failed to get status for pod" 
podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.867901 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.868338 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:28 crc kubenswrapper[4881]: I0121 11:00:28.747435 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 11:00:28 crc kubenswrapper[4881]: I0121 11:00:28.748621 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:28 crc kubenswrapper[4881]: I0121 11:00:28.748826 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:28 crc kubenswrapper[4881]: I0121 11:00:28.749012 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:28 crc kubenswrapper[4881]: I0121 11:00:28.749175 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:28 crc kubenswrapper[4881]: I0121 11:00:28.749336 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:28 crc kubenswrapper[4881]: I0121 11:00:28.918718 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/41bc4c78-71b2-4ca1-b593-410715cb877b-var-lock\") pod \"41bc4c78-71b2-4ca1-b593-410715cb877b\" (UID: 
\"41bc4c78-71b2-4ca1-b593-410715cb877b\") " Jan 21 11:00:28 crc kubenswrapper[4881]: I0121 11:00:28.918893 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41bc4c78-71b2-4ca1-b593-410715cb877b-kube-api-access\") pod \"41bc4c78-71b2-4ca1-b593-410715cb877b\" (UID: \"41bc4c78-71b2-4ca1-b593-410715cb877b\") " Jan 21 11:00:28 crc kubenswrapper[4881]: I0121 11:00:28.919081 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41bc4c78-71b2-4ca1-b593-410715cb877b-kubelet-dir\") pod \"41bc4c78-71b2-4ca1-b593-410715cb877b\" (UID: \"41bc4c78-71b2-4ca1-b593-410715cb877b\") " Jan 21 11:00:28 crc kubenswrapper[4881]: I0121 11:00:28.919456 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41bc4c78-71b2-4ca1-b593-410715cb877b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "41bc4c78-71b2-4ca1-b593-410715cb877b" (UID: "41bc4c78-71b2-4ca1-b593-410715cb877b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:00:28 crc kubenswrapper[4881]: I0121 11:00:28.919502 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41bc4c78-71b2-4ca1-b593-410715cb877b-var-lock" (OuterVolumeSpecName: "var-lock") pod "41bc4c78-71b2-4ca1-b593-410715cb877b" (UID: "41bc4c78-71b2-4ca1-b593-410715cb877b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:00:28 crc kubenswrapper[4881]: I0121 11:00:28.928034 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41bc4c78-71b2-4ca1-b593-410715cb877b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "41bc4c78-71b2-4ca1-b593-410715cb877b" (UID: "41bc4c78-71b2-4ca1-b593-410715cb877b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.020305 4881 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41bc4c78-71b2-4ca1-b593-410715cb877b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.020339 4881 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/41bc4c78-71b2-4ca1-b593-410715cb877b-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.020349 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41bc4c78-71b2-4ca1-b593-410715cb877b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:29 crc kubenswrapper[4881]: E0121 11:00:29.298012 4881 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:29 crc kubenswrapper[4881]: E0121 11:00:29.298385 4881 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:29 crc kubenswrapper[4881]: E0121 11:00:29.298806 4881 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:29 crc kubenswrapper[4881]: E0121 11:00:29.299291 4881 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:29 crc kubenswrapper[4881]: E0121 11:00:29.299574 4881 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.299611 4881 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 21 11:00:29 crc kubenswrapper[4881]: E0121 11:00:29.299906 4881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" interval="200ms" Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.410706 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.410703 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"41bc4c78-71b2-4ca1-b593-410715cb877b","Type":"ContainerDied","Data":"442a9d2a13a72bc50a93b9b5088365fc2ff7f17c8a181731060f8bf93fd639fd"} Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.411394 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="442a9d2a13a72bc50a93b9b5088365fc2ff7f17c8a181731060f8bf93fd639fd" Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.416589 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.416997 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.418154 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.418875 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.419278 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:29 crc kubenswrapper[4881]: E0121 11:00:29.500941 4881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" interval="400ms" Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.692882 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-89m75" Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.692931 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-89m75" Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.741039 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-89m75" Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.742055 4881 status_manager.go:851] 
"Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.742465 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.742711 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.742963 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.743225 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.850929 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.851020 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.851096 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.852541 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.852686 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" 
containerID="cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d" gracePeriod=600 Jan 21 11:00:29 crc kubenswrapper[4881]: E0121 11:00:29.902218 4881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" interval="800ms" Jan 21 11:00:30 crc kubenswrapper[4881]: I0121 11:00:30.077852 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vljfh" Jan 21 11:00:30 crc kubenswrapper[4881]: I0121 11:00:30.077941 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vljfh" Jan 21 11:00:30 crc kubenswrapper[4881]: I0121 11:00:30.129192 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vljfh" Jan 21 11:00:30 crc kubenswrapper[4881]: I0121 11:00:30.129972 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:30 crc kubenswrapper[4881]: I0121 11:00:30.130371 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:30 crc kubenswrapper[4881]: I0121 11:00:30.130718 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:30 crc kubenswrapper[4881]: I0121 11:00:30.131073 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:30 crc kubenswrapper[4881]: I0121 11:00:30.131320 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:30 crc kubenswrapper[4881]: I0121 11:00:30.421855 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 11:00:30 crc kubenswrapper[4881]: I0121 11:00:30.423123 4881 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534" exitCode=0 Jan 21 11:00:30 crc kubenswrapper[4881]: E0121 11:00:30.703300 4881 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" interval="1.6s" Jan 21 11:00:30 crc kubenswrapper[4881]: E0121 11:00:30.830532 4881 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.4:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:30 crc kubenswrapper[4881]: I0121 11:00:30.831575 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.432852 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d" exitCode=0 Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.432917 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d"} Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.900483 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.901371 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.902059 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.902317 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.902542 4881 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.902712 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.902925 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" 
pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.903155 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.991547 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.991620 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.991688 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.991991 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.992022 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.992037 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.092778 4881 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.092826 4881 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.092839 4881 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:32 crc kubenswrapper[4881]: E0121 11:00:32.304718 4881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" interval="3.2s" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.442799 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.444947 4881 scope.go:117] "RemoveContainer" containerID="0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.445041 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.446060 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.446648 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.446966 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.447209 4881 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.447517 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.447985 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.462806 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.463123 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.463398 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.463760 4881 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.464030 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.464284 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.898198 4881 scope.go:117] "RemoveContainer" containerID="b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.921672 4881 scope.go:117] "RemoveContainer" containerID="7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.972078 4881 scope.go:117] "RemoveContainer" containerID="7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d" Jan 21 11:00:33 crc kubenswrapper[4881]: W0121 11:00:33.010602 4881 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-3c3fd17463002dc60f1b6915dc610512a2be8006f920a2d721e7c6794a61be97 WatchSource:0}: Error finding container 3c3fd17463002dc60f1b6915dc610512a2be8006f920a2d721e7c6794a61be97: Status 404 returned error can't find the container with id 3c3fd17463002dc60f1b6915dc610512a2be8006f920a2d721e7c6794a61be97 Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.083224 4881 scope.go:117] "RemoveContainer" containerID="945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534" Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.110050 4881 scope.go:117] "RemoveContainer" containerID="164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f" Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.314445 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.315465 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.316387 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.316972 4881 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.317375 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.317845 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.326935 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.453062 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ffc7cfcc896e97bc89bcafadd903d32675c37638ae26cc272102f0c6d6bc59d1"} Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.453118 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"3c3fd17463002dc60f1b6915dc610512a2be8006f920a2d721e7c6794a61be97"} Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.454254 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:33 crc kubenswrapper[4881]: E0121 11:00:33.454293 4881 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.4:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.454488 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.454768 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.455184 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.455612 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4zlb" event={"ID":"b83e71f8-970c-4afc-ac31-264c7ca6625a","Type":"ContainerStarted","Data":"7e551acaa20677090959425a7116a2212e0375845f7e600b54464bccf79b4461"} Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.455651 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.456370 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection 
refused" Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.456633 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.456918 4881 status_manager.go:851] "Failed to get status for pod" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" pod="openshift-marketplace/redhat-operators-t4zlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t4zlb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.457127 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.457549 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.457972 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.460035 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"f08eae3fb5bfbc3b6dfa6839a34471cb41febf3495ae4845e42b68ed33af40f1"} Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.461301 4881 status_manager.go:851] "Failed to get status for pod" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-fb4fr\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.461470 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.461638 4881 status_manager.go:851] "Failed to get status for pod" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" pod="openshift-marketplace/redhat-operators-t4zlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t4zlb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:33 crc 
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.461838 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused"
[11:00:33.462: 3 identical "Failed to get status for pod" (connection refused) records for community-operators-6rmvm, redhat-marketplace-89m75, redhat-marketplace-vljfh]
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.464187 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfmhs" event={"ID":"d318e830-067f-4722-9d74-a45fcefc939d","Type":"ContainerStarted","Data":"ea62c10cfd248c0ef9c6d0347f5a3b0a2b7e8d1e35c546c01d7fdadf484cb508"}
[11:00:33.465-.467: 8 identical "Failed to get status for pod" records for community-operators-6rmvm, redhat-marketplace-89m75, redhat-marketplace-vljfh, redhat-operators-kfmhs, machine-config-daemon-fb4fr, certified-operators-q6dn5, redhat-operators-t4zlb, installer-9-crc]
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.468209 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sqlm" event={"ID":"5b12596d-1f5f-4d81-b664-d0ddee72552c","Type":"ContainerStarted","Data":"c77f2373cbe2c6efce94e010b4a6e7c282b2ba984b2b3fef90734b6c51cc06d7"}
[11:00:33.468-.471: 9 identical "Failed to get status for pod" records, one per tracked pod (the eight above plus certified-operators-2sqlm)]
Jan 21 11:00:33 crc kubenswrapper[4881]: E0121 11:00:33.848040 4881 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.4:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-89m75.188cb9f7888c87eb openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-89m75,UID:075db786-6ad0-4982-b70e-bd05d4f240ec,APIVersion:v1,ResourceVersion:28589,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 11:00:26.448734187 +0000 UTC m=+213.708690656,LastTimestamp:2026-01-21 11:00:26.448734187 +0000 UTC m=+213.708690656,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 11:00:35 crc kubenswrapper[4881]: E0121 11:00:35.507877 4881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" interval="6.4s"
Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.196275 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q6dn5"
Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.196339 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q6dn5"
pod="openshift-marketplace/certified-operators-q6dn5" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.276091 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q6dn5" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.276949 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.277541 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.278012 4881 status_manager.go:851] "Failed to get status for pod" podUID="d318e830-067f-4722-9d74-a45fcefc939d" pod="openshift-marketplace/redhat-operators-kfmhs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kfmhs\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.278382 4881 status_manager.go:851] "Failed to get status for pod" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-fb4fr\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.278837 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.279237 4881 status_manager.go:851] "Failed to get status for pod" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" pod="openshift-marketplace/certified-operators-2sqlm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2sqlm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.279849 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.280337 4881 status_manager.go:851] "Failed to get status for pod" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" pod="openshift-marketplace/redhat-operators-t4zlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t4zlb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.280838 4881 status_manager.go:851] "Failed to get status for pod" 
podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.571332 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q6dn5" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.571988 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.572635 4881 status_manager.go:851] "Failed to get status for pod" podUID="d318e830-067f-4722-9d74-a45fcefc939d" pod="openshift-marketplace/redhat-operators-kfmhs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kfmhs\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.573389 4881 status_manager.go:851] "Failed to get status for pod" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-fb4fr\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.573711 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.574079 4881 status_manager.go:851] "Failed to get status for pod" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" pod="openshift-marketplace/certified-operators-2sqlm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2sqlm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.574403 4881 status_manager.go:851] "Failed to get status for pod" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" pod="openshift-marketplace/redhat-operators-t4zlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t4zlb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.574715 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.575086 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.575427 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.728388 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2sqlm" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.728466 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2sqlm" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.779838 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2sqlm" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.780508 4881 status_manager.go:851] "Failed to get status for pod" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" pod="openshift-marketplace/certified-operators-2sqlm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2sqlm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.780998 4881 status_manager.go:851] "Failed to get status for pod" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" pod="openshift-marketplace/redhat-operators-t4zlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t4zlb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.781339 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.781648 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.781958 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.782326 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.782741 4881 
Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.856908 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6rmvm"
[11:00:37.857-.860: 9 identical "Failed to get status for pod" records, one per tracked pod]
Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.536974 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.537036 4881 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e" exitCode=1
Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.537141 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e"}
Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.537907 4881 scope.go:117] "RemoveContainer" containerID="d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e"
[11:00:38.538-.547: 10 identical "Failed to get status for pod" records, the nine tracked pods plus kube-controller-manager-crc]
connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.541163 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.542045 4881 status_manager.go:851] "Failed to get status for pod" podUID="d318e830-067f-4722-9d74-a45fcefc939d" pod="openshift-marketplace/redhat-operators-kfmhs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kfmhs\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.542228 4881 status_manager.go:851] "Failed to get status for pod" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-fb4fr\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.542395 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.542559 4881 status_manager.go:851] "Failed to get status for pod" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" pod="openshift-marketplace/certified-operators-2sqlm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2sqlm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.546009 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.546630 4881 status_manager.go:851] "Failed to get status for pod" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" pod="openshift-marketplace/redhat-operators-t4zlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t4zlb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.604580 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2sqlm" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.605559 4881 status_manager.go:851] "Failed to get status for pod" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" pod="openshift-marketplace/redhat-operators-t4zlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t4zlb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.605981 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.606150 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.606305 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.606464 4881 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.606735 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.606916 4881 status_manager.go:851] "Failed to get status for pod" podUID="d318e830-067f-4722-9d74-a45fcefc939d" pod="openshift-marketplace/redhat-operators-kfmhs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kfmhs\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.607083 4881 status_manager.go:851] "Failed to get status for pod" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-fb4fr\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.607248 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.607411 4881 status_manager.go:851] "Failed to get status for pod" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" pod="openshift-marketplace/certified-operators-2sqlm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2sqlm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.854207 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.547961 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.548900 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ff33174746d19460aab25278d732a07a6255013c7f12e5755802d92014fc940a"} Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.550404 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.550935 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.551455 4881 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.551984 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.552215 4881 status_manager.go:851] "Failed to get status for pod" podUID="d318e830-067f-4722-9d74-a45fcefc939d" pod="openshift-marketplace/redhat-operators-kfmhs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kfmhs\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.552486 4881 status_manager.go:851] "Failed to get status for pod" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-fb4fr\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.552825 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.553227 
Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.741037 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-89m75"
[11:00:39.742-.747: 10 identical "Failed to get status for pod" records, one per tracked pod]
Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.139259 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vljfh"
[11:00:40.140-.145: 10 identical "Failed to get status for pod" records, one per tracked pod]
Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.337096 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
[11:00:40.338-.343: 10 identical "Failed to get status for pod" records, one per tracked pod]
Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.355226 4881 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5da31bf1-60a6-4d73-a425-97fe36cd40ee"
Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.355289 4881 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5da31bf1-60a6-4d73-a425-97fe36cd40ee"
Jan 21 11:00:40 crc kubenswrapper[4881]: E0121 11:00:40.355930 4881 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.356669 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.361766 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kfmhs"
Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.362205 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kfmhs"
Jan 21 11:00:40 crc kubenswrapper[4881]: W0121 11:00:40.388658 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-2d2396e8c25911397601513a07678c8c6371d5854b6a02b5782353dc2e1e3ef8 WatchSource:0}: Error finding container 2d2396e8c25911397601513a07678c8c6371d5854b6a02b5782353dc2e1e3ef8: Status 404 returned error can't find the container with id 2d2396e8c25911397601513a07678c8c6371d5854b6a02b5782353dc2e1e3ef8
Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.416897 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kfmhs"
[11:00:40.417-.422: 10 identical "Failed to get status for pod" records, one per tracked pod]
Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.556612 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2d2396e8c25911397601513a07678c8c6371d5854b6a02b5782353dc2e1e3ef8"}
Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.614534 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kfmhs"
[11:00:40.616-.620: 10 identical "Failed to get status for pod" records, one per tracked pod]
Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.773285 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t4zlb"
Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.773342 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t4zlb"
Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.815260 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t4zlb"
[11:00:40.816-.818: 10 identical "Failed to get status for pod" records, one per tracked pod]
Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.234357 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.238842 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
[11:00:41.239-.241: 4 identical "Failed to get status for pod" records for kube-controller-manager-crc, redhat-marketplace-vljfh, redhat-operators-kfmhs, machine-config-daemon-fb4fr]
Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.241426 4881
status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.241860 4881 status_manager.go:851] "Failed to get status for pod" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" pod="openshift-marketplace/certified-operators-2sqlm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2sqlm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.242373 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.242904 4881 status_manager.go:851] "Failed to get status for pod" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" pod="openshift-marketplace/redhat-operators-t4zlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t4zlb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.243369 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.243621 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.566423 4881 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="4cc224efcd44cd97aee734ee43bb83e308c8aa758eb86919b437e9cb332377ca" exitCode=0 Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.566562 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"4cc224efcd44cd97aee734ee43bb83e308c8aa758eb86919b437e9cb332377ca"} Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.566824 4881 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5da31bf1-60a6-4d73-a425-97fe36cd40ee" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.567165 4881 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5da31bf1-60a6-4d73-a425-97fe36cd40ee" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.567514 4881 status_manager.go:851] "Failed to get status for pod" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" pod="openshift-marketplace/certified-operators-2sqlm" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2sqlm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.567801 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.567935 4881 status_manager.go:851] "Failed to get status for pod" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" pod="openshift-marketplace/redhat-operators-t4zlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t4zlb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: E0121 11:00:41.567957 4881 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.568238 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.568688 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.570388 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.570811 4881 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.571147 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.571483 4881 status_manager.go:851] "Failed to get status for pod" podUID="d318e830-067f-4722-9d74-a45fcefc939d" pod="openshift-marketplace/redhat-operators-kfmhs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kfmhs\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.573267 4881 
status_manager.go:851] "Failed to get status for pod" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-fb4fr\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.573949 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.621946 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t4zlb" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.622892 4881 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.623188 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.623420 4881 status_manager.go:851] "Failed to get status for pod" podUID="d318e830-067f-4722-9d74-a45fcefc939d" pod="openshift-marketplace/redhat-operators-kfmhs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kfmhs\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.623935 4881 status_manager.go:851] "Failed to get status for pod" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-fb4fr\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.624396 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.624585 4881 status_manager.go:851] "Failed to get status for pod" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" pod="openshift-marketplace/certified-operators-2sqlm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2sqlm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.625088 4881 status_manager.go:851] "Failed to get status for pod" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" 
pod="openshift-marketplace/redhat-operators-t4zlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t4zlb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.625778 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.626112 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.626362 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: E0121 11:00:41.909569 4881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" interval="7s" Jan 21 11:00:42 crc kubenswrapper[4881]: I0121 11:00:42.637312 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3d6e82a5b7cad5bf1a2142628cbfd847c7527dc87d02df8c818b477e8186e80c"} Jan 21 11:00:42 crc kubenswrapper[4881]: I0121 11:00:42.637853 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fa8c66424805081402cbf09b76ebe7eb1b727c9472e19926a74b709f32df256c"} Jan 21 11:00:42 crc kubenswrapper[4881]: I0121 11:00:42.637869 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f88eb7e3a7828df105488abc11b051b98ec4a3a8ce36cafb8ea569b3d9737c7c"} Jan 21 11:00:43 crc kubenswrapper[4881]: I0121 11:00:43.647766 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"10b322b02ba88fb1e74f4c96ac00898962f9b10f9ead20dac706f7e28969eb29"} Jan 21 11:00:43 crc kubenswrapper[4881]: I0121 11:00:43.648323 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:43 crc kubenswrapper[4881]: I0121 11:00:43.648340 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5adc74d07f5fb13d0f83706dc0ab5eff934c025860980485fb0100f977921a27"} Jan 21 11:00:43 crc kubenswrapper[4881]: I0121 11:00:43.648164 4881 
kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5da31bf1-60a6-4d73-a425-97fe36cd40ee" Jan 21 11:00:43 crc kubenswrapper[4881]: I0121 11:00:43.648369 4881 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5da31bf1-60a6-4d73-a425-97fe36cd40ee" Jan 21 11:00:44 crc kubenswrapper[4881]: I0121 11:00:44.463937 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-whh46" podUID="2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" containerName="oauth-openshift" containerID="cri-o://35ce5ecabc873c14d35cf37aa4dd5c20723f513985dbc4caa43cffafe43e41fe" gracePeriod=15 Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.357913 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.358928 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.363997 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.368387 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.513237 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-session\") pod \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.513337 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-router-certs\") pod \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.513395 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-service-ca\") pod \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.513479 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-error\") pod \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.513522 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-audit-policies\") pod \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.513560 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-provider-selection\") pod \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.513598 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbqhc\" (UniqueName: \"kubernetes.io/projected/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-kube-api-access-lbqhc\") pod \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.513655 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-idp-0-file-data\") pod \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.513720 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-serving-cert\") pod \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.513754 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-audit-dir\") pod \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.513814 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-ocp-branding-template\") pod \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.513871 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-cliconfig\") pod \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.513933 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-login\") pod \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.513965 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-trusted-ca-bundle\") pod \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.514422 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-service-ca" 
(OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" (UID: "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.514510 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" (UID: "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.514975 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" (UID: "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.515543 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" (UID: "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.516136 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" (UID: "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.522286 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" (UID: "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.523963 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" (UID: "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.524611 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-kube-api-access-lbqhc" (OuterVolumeSpecName: "kube-api-access-lbqhc") pod "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" (UID: "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad"). InnerVolumeSpecName "kube-api-access-lbqhc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.524617 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" (UID: "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.525158 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" (UID: "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.525546 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" (UID: "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.525728 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" (UID: "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.526335 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" (UID: "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.527459 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" (UID: "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.615506 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.615585 4881 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.615605 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.615626 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbqhc\" (UniqueName: \"kubernetes.io/projected/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-kube-api-access-lbqhc\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.615641 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.615662 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.615675 4881 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.615692 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.615715 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.615736 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.615754 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.615772 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.615807 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.615822 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.664484 4881 generic.go:334] "Generic (PLEG): container finished" podID="2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" containerID="35ce5ecabc873c14d35cf37aa4dd5c20723f513985dbc4caa43cffafe43e41fe" exitCode=0 Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.670863 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.671141 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-whh46" event={"ID":"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad","Type":"ContainerDied","Data":"35ce5ecabc873c14d35cf37aa4dd5c20723f513985dbc4caa43cffafe43e41fe"} Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.674582 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-whh46" event={"ID":"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad","Type":"ContainerDied","Data":"216606908c8b27d34a9f3f57e132945839e5bd3eae4f856f2671c9e8308d7423"} Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.674670 4881 scope.go:117] "RemoveContainer" containerID="35ce5ecabc873c14d35cf37aa4dd5c20723f513985dbc4caa43cffafe43e41fe" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.704637 4881 scope.go:117] "RemoveContainer" containerID="35ce5ecabc873c14d35cf37aa4dd5c20723f513985dbc4caa43cffafe43e41fe" Jan 21 11:00:45 crc kubenswrapper[4881]: E0121 11:00:45.707748 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35ce5ecabc873c14d35cf37aa4dd5c20723f513985dbc4caa43cffafe43e41fe\": container with ID starting with 35ce5ecabc873c14d35cf37aa4dd5c20723f513985dbc4caa43cffafe43e41fe not found: ID does not exist" containerID="35ce5ecabc873c14d35cf37aa4dd5c20723f513985dbc4caa43cffafe43e41fe" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.707848 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35ce5ecabc873c14d35cf37aa4dd5c20723f513985dbc4caa43cffafe43e41fe"} err="failed to get container status \"35ce5ecabc873c14d35cf37aa4dd5c20723f513985dbc4caa43cffafe43e41fe\": rpc error: code = NotFound desc = could not find container \"35ce5ecabc873c14d35cf37aa4dd5c20723f513985dbc4caa43cffafe43e41fe\": container with ID starting with 35ce5ecabc873c14d35cf37aa4dd5c20723f513985dbc4caa43cffafe43e41fe not found: ID does not exist" Jan 21 11:00:48 crc kubenswrapper[4881]: I0121 11:00:48.672452 4881 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:48 crc kubenswrapper[4881]: I0121 11:00:48.763632 4881 status_manager.go:861] "Pod was deleted and 
then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5a430d8b-9d5e-41d8-a702-5042d4c683ad" Jan 21 11:00:49 crc kubenswrapper[4881]: E0121 11:00:49.185969 4881 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError" Jan 21 11:00:49 crc kubenswrapper[4881]: I0121 11:00:49.701965 4881 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5da31bf1-60a6-4d73-a425-97fe36cd40ee" Jan 21 11:00:49 crc kubenswrapper[4881]: I0121 11:00:49.702049 4881 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5da31bf1-60a6-4d73-a425-97fe36cd40ee" Jan 21 11:00:49 crc kubenswrapper[4881]: I0121 11:00:49.706691 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:49 crc kubenswrapper[4881]: I0121 11:00:49.707272 4881 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5a430d8b-9d5e-41d8-a702-5042d4c683ad" Jan 21 11:00:49 crc kubenswrapper[4881]: E0121 11:00:49.818441 4881 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError" Jan 21 11:00:50 crc kubenswrapper[4881]: I0121 11:00:50.710563 4881 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5da31bf1-60a6-4d73-a425-97fe36cd40ee" Jan 21 11:00:50 crc kubenswrapper[4881]: I0121 11:00:50.710612 4881 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5da31bf1-60a6-4d73-a425-97fe36cd40ee" Jan 21 11:00:50 crc kubenswrapper[4881]: I0121 11:00:50.714191 4881 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5a430d8b-9d5e-41d8-a702-5042d4c683ad" Jan 21 11:00:56 crc kubenswrapper[4881]: I0121 11:00:56.496352 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 11:00:58 crc kubenswrapper[4881]: I0121 11:00:58.376454 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 11:00:58 crc kubenswrapper[4881]: I0121 11:00:58.606859 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 11:00:58 crc kubenswrapper[4881]: I0121 11:00:58.655061 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 11:00:58 crc kubenswrapper[4881]: I0121 11:00:58.822743 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 21 11:00:59 crc kubenswrapper[4881]: I0121 11:00:59.366971 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 11:00:59 crc kubenswrapper[4881]: I0121 11:00:59.630573 4881 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 11:00:59 crc kubenswrapper[4881]: I0121 11:00:59.748969 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 11:01:00 crc kubenswrapper[4881]: I0121 11:01:00.361505 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 11:01:00 crc kubenswrapper[4881]: I0121 11:01:00.464229 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 11:01:00 crc kubenswrapper[4881]: I0121 11:01:00.571330 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 11:01:00 crc kubenswrapper[4881]: I0121 11:01:00.706217 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 11:01:00 crc kubenswrapper[4881]: I0121 11:01:00.882004 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 21 11:01:01 crc kubenswrapper[4881]: I0121 11:01:01.025830 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 21 11:01:01 crc kubenswrapper[4881]: I0121 11:01:01.137881 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 11:01:01 crc kubenswrapper[4881]: I0121 11:01:01.155196 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 11:01:01 crc kubenswrapper[4881]: I0121 11:01:01.279252 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 11:01:01 crc kubenswrapper[4881]: I0121 11:01:01.292128 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 11:01:01 crc kubenswrapper[4881]: I0121 11:01:01.296176 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 11:01:01 crc kubenswrapper[4881]: I0121 11:01:01.345988 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 11:01:01 crc kubenswrapper[4881]: I0121 11:01:01.475355 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 11:01:01 crc kubenswrapper[4881]: I0121 11:01:01.475457 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 11:01:01 crc kubenswrapper[4881]: I0121 11:01:01.481359 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 11:01:01 crc kubenswrapper[4881]: I0121 11:01:01.629684 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 11:01:01 crc kubenswrapper[4881]: I0121 11:01:01.862393 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 11:01:01 crc kubenswrapper[4881]: I0121 
11:01:01.875669 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 11:01:01 crc kubenswrapper[4881]: I0121 11:01:01.963145 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 21 11:01:01 crc kubenswrapper[4881]: I0121 11:01:01.977859 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.009559 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.064859 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.175236 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.302502 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.403824 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.486660 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.547124 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.551911 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.584738 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.589340 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.605121 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.617502 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.671378 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.868344 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.894553 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.982839 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.007506 4881 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.017908 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.018426 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.022211 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.025265 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.096639 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.104580 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.128332 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.174121 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.297669 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.306138 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.348512 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.351947 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.396298 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.493011 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.585354 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.594190 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.745181 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.794889 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.841750 4881 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.848830 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.853639 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.898320 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.911224 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.972748 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.989561 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 11:01:04 crc kubenswrapper[4881]: I0121 11:01:04.000462 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 11:01:04 crc kubenswrapper[4881]: I0121 11:01:04.086565 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 11:01:04 crc kubenswrapper[4881]: I0121 11:01:04.109369 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 11:01:04 crc kubenswrapper[4881]: I0121 11:01:04.137785 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 11:01:04 crc kubenswrapper[4881]: I0121 11:01:04.266015 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 11:01:04 crc kubenswrapper[4881]: I0121 11:01:04.285025 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 11:01:04 crc kubenswrapper[4881]: I0121 11:01:04.316290 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 11:01:04 crc kubenswrapper[4881]: I0121 11:01:04.451909 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 11:01:04 crc kubenswrapper[4881]: I0121 11:01:04.577697 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 21 11:01:04 crc kubenswrapper[4881]: I0121 11:01:04.826202 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 11:01:04 crc kubenswrapper[4881]: I0121 11:01:04.839442 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 11:01:05 crc kubenswrapper[4881]: I0121 11:01:05.041579 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 21 11:01:05 crc 
kubenswrapper[4881]: I0121 11:01:05.232341 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 11:01:05 crc kubenswrapper[4881]: I0121 11:01:05.245602 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 11:01:05 crc kubenswrapper[4881]: I0121 11:01:05.275814 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 11:01:05 crc kubenswrapper[4881]: I0121 11:01:05.287574 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 11:01:05 crc kubenswrapper[4881]: I0121 11:01:05.354039 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 11:01:05 crc kubenswrapper[4881]: I0121 11:01:05.394674 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 11:01:05 crc kubenswrapper[4881]: I0121 11:01:05.529220 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 11:01:05 crc kubenswrapper[4881]: I0121 11:01:05.578063 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 11:01:05 crc kubenswrapper[4881]: I0121 11:01:05.580897 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 11:01:05 crc kubenswrapper[4881]: I0121 11:01:05.623566 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 21 11:01:05 crc kubenswrapper[4881]: I0121 11:01:05.656215 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 11:01:05 crc kubenswrapper[4881]: I0121 11:01:05.702568 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 11:01:05 crc kubenswrapper[4881]: I0121 11:01:05.747388 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 21 11:01:05 crc kubenswrapper[4881]: I0121 11:01:05.925558 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 11:01:06 crc kubenswrapper[4881]: I0121 11:01:06.120801 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 21 11:01:06 crc kubenswrapper[4881]: I0121 11:01:06.162739 4881 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 21 11:01:06 crc kubenswrapper[4881]: I0121 11:01:06.341875 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 11:01:06 crc kubenswrapper[4881]: I0121 11:01:06.348644 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 21 11:01:06 crc kubenswrapper[4881]: I0121 11:01:06.614154 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 11:01:06 crc kubenswrapper[4881]: I0121 
11:01:06.750446 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 11:01:06 crc kubenswrapper[4881]: I0121 11:01:06.837432 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 11:01:06 crc kubenswrapper[4881]: I0121 11:01:06.850753 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 11:01:06 crc kubenswrapper[4881]: I0121 11:01:06.917689 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 11:01:06 crc kubenswrapper[4881]: I0121 11:01:06.952377 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 11:01:07 crc kubenswrapper[4881]: I0121 11:01:07.018906 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 11:01:07 crc kubenswrapper[4881]: I0121 11:01:07.070870 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 11:01:07 crc kubenswrapper[4881]: I0121 11:01:07.092999 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 11:01:07 crc kubenswrapper[4881]: I0121 11:01:07.373527 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 21 11:01:07 crc kubenswrapper[4881]: I0121 11:01:07.431497 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 11:01:07 crc kubenswrapper[4881]: I0121 11:01:07.435656 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 11:01:07 crc kubenswrapper[4881]: I0121 11:01:07.476487 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 11:01:07 crc kubenswrapper[4881]: I0121 11:01:07.510807 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 11:01:07 crc kubenswrapper[4881]: I0121 11:01:07.526967 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 11:01:07 crc kubenswrapper[4881]: I0121 11:01:07.539998 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 11:01:07 crc kubenswrapper[4881]: I0121 11:01:07.779632 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 21 11:01:07 crc kubenswrapper[4881]: I0121 11:01:07.814869 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 11:01:07 crc kubenswrapper[4881]: I0121 11:01:07.842977 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 11:01:07 crc kubenswrapper[4881]: I0121 11:01:07.902865 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 21 11:01:07 crc 
kubenswrapper[4881]: I0121 11:01:07.920331 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 11:01:07 crc kubenswrapper[4881]: I0121 11:01:07.961992 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.018469 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.018859 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.082652 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.111707 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.132269 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.139415 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.186391 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.204083 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.235604 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.235611 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.291692 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.370087 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.399662 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.422738 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.472672 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.557236 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.625635 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 
11:01:08.641036 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.652754 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.766631 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.819275 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.821558 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.912049 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.922548 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.925082 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.961603 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.969902 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.060121 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.137268 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.168534 4881 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.195767 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.358774 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.366018 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.566122 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.575562 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.583330 4881 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 
Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.602636 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.643033 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.661623 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.690484 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.746172 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.858878 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.874628 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.003063 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.011642 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.070827 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.213276 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.229574 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.269981 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.363227 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.371861 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.372052 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.471883 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.479208 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.528254 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 
11:01:10.571044 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.590883 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.678810 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.759434 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.759954 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.969368 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.990956 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.993995 4881 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 11:01:11 crc kubenswrapper[4881]: I0121 11:01:11.035635 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 11:01:11 crc kubenswrapper[4881]: I0121 11:01:11.112563 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 11:01:11 crc kubenswrapper[4881]: I0121 11:01:11.290575 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 11:01:11 crc kubenswrapper[4881]: I0121 11:01:11.292828 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 21 11:01:11 crc kubenswrapper[4881]: I0121 11:01:11.298879 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 11:01:11 crc kubenswrapper[4881]: I0121 11:01:11.316188 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 11:01:11 crc kubenswrapper[4881]: I0121 11:01:11.367468 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 11:01:11 crc kubenswrapper[4881]: I0121 11:01:11.422315 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 11:01:11 crc kubenswrapper[4881]: I0121 11:01:11.589922 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 11:01:11 crc kubenswrapper[4881]: I0121 11:01:11.598055 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 11:01:11 crc kubenswrapper[4881]: I0121 11:01:11.601640 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 11:01:11 crc 
kubenswrapper[4881]: I0121 11:01:11.606397 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 11:01:11 crc kubenswrapper[4881]: I0121 11:01:11.624185 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 11:01:11 crc kubenswrapper[4881]: I0121 11:01:11.668410 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 11:01:11 crc kubenswrapper[4881]: I0121 11:01:11.819238 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 11:01:12 crc kubenswrapper[4881]: I0121 11:01:12.012143 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 11:01:12 crc kubenswrapper[4881]: I0121 11:01:12.061226 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 21 11:01:12 crc kubenswrapper[4881]: I0121 11:01:12.129886 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 21 11:01:12 crc kubenswrapper[4881]: I0121 11:01:12.131761 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 11:01:12 crc kubenswrapper[4881]: I0121 11:01:12.135972 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 11:01:12 crc kubenswrapper[4881]: I0121 11:01:12.174895 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 11:01:12 crc kubenswrapper[4881]: I0121 11:01:12.542659 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 11:01:12 crc kubenswrapper[4881]: I0121 11:01:12.621234 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 11:01:12 crc kubenswrapper[4881]: I0121 11:01:12.762617 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 11:01:12 crc kubenswrapper[4881]: I0121 11:01:12.776595 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 11:01:12 crc kubenswrapper[4881]: I0121 11:01:12.869447 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 11:01:12 crc kubenswrapper[4881]: I0121 11:01:12.891733 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 11:01:12 crc kubenswrapper[4881]: I0121 11:01:12.960021 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 11:01:13 crc kubenswrapper[4881]: I0121 11:01:13.017479 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 11:01:13 crc kubenswrapper[4881]: I0121 11:01:13.021128 4881 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 11:01:13 crc kubenswrapper[4881]: I0121 11:01:13.023123 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 21 11:01:13 crc kubenswrapper[4881]: I0121 11:01:13.108171 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 11:01:13 crc kubenswrapper[4881]: I0121 11:01:13.108350 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 21 11:01:13 crc kubenswrapper[4881]: I0121 11:01:13.231471 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 11:01:13 crc kubenswrapper[4881]: I0121 11:01:13.248134 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 11:01:13 crc kubenswrapper[4881]: I0121 11:01:13.432140 4881 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 11:01:13 crc kubenswrapper[4881]: I0121 11:01:13.561594 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 11:01:13 crc kubenswrapper[4881]: I0121 11:01:13.647029 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 21 11:01:13 crc kubenswrapper[4881]: I0121 11:01:13.822527 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 11:01:13 crc kubenswrapper[4881]: I0121 11:01:13.829124 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 11:01:13 crc kubenswrapper[4881]: I0121 11:01:13.967964 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 21 11:01:13 crc kubenswrapper[4881]: I0121 11:01:13.990552 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 11:01:14 crc kubenswrapper[4881]: I0121 11:01:14.048132 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 11:01:14 crc kubenswrapper[4881]: I0121 11:01:14.073550 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 11:01:14 crc kubenswrapper[4881]: I0121 11:01:14.233009 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 11:01:14 crc kubenswrapper[4881]: I0121 11:01:14.315468 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 11:01:14 crc kubenswrapper[4881]: I0121 11:01:14.489461 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 11:01:14 crc kubenswrapper[4881]: I0121 11:01:14.696441 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 11:01:31 crc kubenswrapper[4881]: I0121 11:01:31.653476 4881 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-tls" Jan 21 11:01:43 crc kubenswrapper[4881]: I0121 11:01:43.771568 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 11:01:47 crc kubenswrapper[4881]: I0121 11:01:47.701576 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 11:01:47 crc kubenswrapper[4881]: I0121 11:01:47.760340 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.865473 4881 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.866881 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kfmhs" podStartSLOduration=87.291891007 podStartE2EDuration="3m2.866859437s" podCreationTimestamp="2026-01-21 10:58:50 +0000 UTC" firstStartedPulling="2026-01-21 10:58:52.687157552 +0000 UTC m=+119.947114021" lastFinishedPulling="2026-01-21 11:00:28.262125982 +0000 UTC m=+215.522082451" observedRunningTime="2026-01-21 11:00:48.613318464 +0000 UTC m=+235.873274933" watchObservedRunningTime="2026-01-21 11:01:52.866859437 +0000 UTC m=+300.126815896" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.867686 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q6dn5" podStartSLOduration=92.223133184 podStartE2EDuration="3m6.867677489s" podCreationTimestamp="2026-01-21 10:58:46 +0000 UTC" firstStartedPulling="2026-01-21 10:58:49.413397935 +0000 UTC m=+116.673354404" lastFinishedPulling="2026-01-21 11:00:24.05794224 +0000 UTC m=+211.317898709" observedRunningTime="2026-01-21 11:00:48.676916006 +0000 UTC m=+235.936872475" watchObservedRunningTime="2026-01-21 11:01:52.867677489 +0000 UTC m=+300.127633958" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.868474 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vljfh" podStartSLOduration=97.578040704 podStartE2EDuration="3m3.868466749s" podCreationTimestamp="2026-01-21 10:58:49 +0000 UTC" firstStartedPulling="2026-01-21 10:58:51.485309684 +0000 UTC m=+118.745266153" lastFinishedPulling="2026-01-21 11:00:17.775735729 +0000 UTC m=+205.035692198" observedRunningTime="2026-01-21 11:00:48.587840335 +0000 UTC m=+235.847796814" watchObservedRunningTime="2026-01-21 11:01:52.868466749 +0000 UTC m=+300.128423218" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.868586 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t4zlb" podStartSLOduration=82.643953818 podStartE2EDuration="3m2.868580632s" podCreationTimestamp="2026-01-21 10:58:50 +0000 UTC" firstStartedPulling="2026-01-21 10:58:52.673688132 +0000 UTC m=+119.933644601" lastFinishedPulling="2026-01-21 11:00:32.898314946 +0000 UTC m=+220.158271415" observedRunningTime="2026-01-21 11:00:48.725181764 +0000 UTC m=+235.985138243" watchObservedRunningTime="2026-01-21 11:01:52.868580632 +0000 UTC m=+300.128537101" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.868931 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-89m75" podStartSLOduration=93.325736883 podStartE2EDuration="3m4.8689248s" 
podCreationTimestamp="2026-01-21 10:58:48 +0000 UTC" firstStartedPulling="2026-01-21 10:58:51.645049337 +0000 UTC m=+118.905005806" lastFinishedPulling="2026-01-21 11:00:23.188237254 +0000 UTC m=+210.448193723" observedRunningTime="2026-01-21 11:00:48.545141203 +0000 UTC m=+235.805097682" watchObservedRunningTime="2026-01-21 11:01:52.8689248 +0000 UTC m=+300.128881279" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.870842 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2sqlm" podStartSLOduration=82.398900067 podStartE2EDuration="3m5.870836471s" podCreationTimestamp="2026-01-21 10:58:47 +0000 UTC" firstStartedPulling="2026-01-21 10:58:49.409825088 +0000 UTC m=+116.669781557" lastFinishedPulling="2026-01-21 11:00:32.881761452 +0000 UTC m=+220.141717961" observedRunningTime="2026-01-21 11:00:48.70143362 +0000 UTC m=+235.961390099" watchObservedRunningTime="2026-01-21 11:01:52.870836471 +0000 UTC m=+300.130792940" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.871280 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6rmvm" podStartSLOduration=92.069366057 podStartE2EDuration="3m5.871276662s" podCreationTimestamp="2026-01-21 10:58:47 +0000 UTC" firstStartedPulling="2026-01-21 10:58:49.387616253 +0000 UTC m=+116.647572722" lastFinishedPulling="2026-01-21 11:00:23.189526858 +0000 UTC m=+210.449483327" observedRunningTime="2026-01-21 11:00:48.759701452 +0000 UTC m=+236.019657921" watchObservedRunningTime="2026-01-21 11:01:52.871276662 +0000 UTC m=+300.131233131" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.872138 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-whh46","openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.872203 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-apiserver/kube-apiserver-startup-monitor-crc","openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8"] Jan 21 11:01:52 crc kubenswrapper[4881]: E0121 11:01:52.872556 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" containerName="installer" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.872576 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" containerName="installer" Jan 21 11:01:52 crc kubenswrapper[4881]: E0121 11:01:52.872594 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" containerName="oauth-openshift" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.872601 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" containerName="oauth-openshift" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.872746 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" containerName="installer" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.872759 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" containerName="oauth-openshift" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.873193 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8","openshift-controller-manager/controller-manager-879f6c89f-wjlxh","openshift-marketplace/certified-operators-2sqlm","openshift-marketplace/community-operators-6rmvm"] Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.873438 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6rmvm" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" containerName="registry-server" containerID="cri-o://7e5f304bc82a020e253bc1850121534b947e1ce59d3cde3e998cffd1481389a2" gracePeriod=2 Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.873996 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2sqlm" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" containerName="registry-server" containerID="cri-o://c77f2373cbe2c6efce94e010b4a6e7c282b2ba984b2b3fef90734b6c51cc06d7" gracePeriod=2 Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.874106 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.875799 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" podUID="706c6a3b-823b-4ea3-b7a8-e20d571d3ace" containerName="route-controller-manager" containerID="cri-o://9c8c8d93509d2a29c183d63351f0748ec6e60414dbb285df980924884b598111" gracePeriod=30 Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.876101 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" podUID="002a39eb-e2e0-4d3e-8f61-89a539a653a9" containerName="controller-manager" containerID="cri-o://6b8fc2aac0518f9de92cee69b4b59a05f08ed2161c480a5655d85171be0e5a8b" gracePeriod=30 Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.877544 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.885636 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.885969 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.885996 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.886612 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.888524 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.890122 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.890334 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 
11:01:52.890611 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.890996 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.891513 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.898280 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.898693 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.899104 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.901716 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.928895 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.933632 4881 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.934529 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwwx7\" (UniqueName: \"kubernetes.io/projected/beca3a20-cc8d-4051-80e4-abefdc51ade5-kube-api-access-kwwx7\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.934601 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-user-template-error\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.934638 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-session\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.934676 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 
11:01:52.934712 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.934736 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.934774 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/beca3a20-cc8d-4051-80e4-abefdc51ade5-audit-dir\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.934821 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-user-template-login\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.934862 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-router-certs\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.934896 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.934927 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/beca3a20-cc8d-4051-80e4-abefdc51ade5-audit-policies\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.934964 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: 
\"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.934993 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-service-ca\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.935118 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.959013 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=64.958993777 podStartE2EDuration="1m4.958993777s" podCreationTimestamp="2026-01-21 11:00:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:01:52.956173264 +0000 UTC m=+300.216129743" watchObservedRunningTime="2026-01-21 11:01:52.958993777 +0000 UTC m=+300.218950246" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.983500 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=6.983475351 podStartE2EDuration="6.983475351s" podCreationTimestamp="2026-01-21 11:01:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:01:52.981336576 +0000 UTC m=+300.241293045" watchObservedRunningTime="2026-01-21 11:01:52.983475351 +0000 UTC m=+300.243431820" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.038581 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-router-certs\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.038653 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.038687 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/beca3a20-cc8d-4051-80e4-abefdc51ade5-audit-policies\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc 
kubenswrapper[4881]: I0121 11:01:53.038734 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.038764 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-service-ca\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.040959 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/beca3a20-cc8d-4051-80e4-abefdc51ade5-audit-policies\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.041991 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.042092 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwwx7\" (UniqueName: \"kubernetes.io/projected/beca3a20-cc8d-4051-80e4-abefdc51ade5-kube-api-access-kwwx7\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.042147 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-user-template-error\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.042191 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-session\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.042284 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.042351 4881 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.042382 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.042452 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-user-template-login\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.042483 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/beca3a20-cc8d-4051-80e4-abefdc51ade5-audit-dir\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.042644 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/beca3a20-cc8d-4051-80e4-abefdc51ade5-audit-dir\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.042883 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.050934 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.052343 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-service-ca\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.059835 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-user-template-error\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.061820 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.065018 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-session\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.067372 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.069596 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwwx7\" (UniqueName: \"kubernetes.io/projected/beca3a20-cc8d-4051-80e4-abefdc51ade5-kube-api-access-kwwx7\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.075225 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.078012 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.082751 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-router-certs\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.084826 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-user-template-login\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.143977 4881 generic.go:334] "Generic (PLEG): container finished" podID="2c460bf5-05a1-4977-b889-1a5c3263df33" containerID="7e5f304bc82a020e253bc1850121534b947e1ce59d3cde3e998cffd1481389a2" exitCode=0 Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.144121 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rmvm" event={"ID":"2c460bf5-05a1-4977-b889-1a5c3263df33","Type":"ContainerDied","Data":"7e5f304bc82a020e253bc1850121534b947e1ce59d3cde3e998cffd1481389a2"} Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.154756 4881 generic.go:334] "Generic (PLEG): container finished" podID="5b12596d-1f5f-4d81-b664-d0ddee72552c" containerID="c77f2373cbe2c6efce94e010b4a6e7c282b2ba984b2b3fef90734b6c51cc06d7" exitCode=0 Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.154901 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sqlm" event={"ID":"5b12596d-1f5f-4d81-b664-d0ddee72552c","Type":"ContainerDied","Data":"c77f2373cbe2c6efce94e010b4a6e7c282b2ba984b2b3fef90734b6c51cc06d7"} Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.156904 4881 generic.go:334] "Generic (PLEG): container finished" podID="706c6a3b-823b-4ea3-b7a8-e20d571d3ace" containerID="9c8c8d93509d2a29c183d63351f0748ec6e60414dbb285df980924884b598111" exitCode=0 Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.156963 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" event={"ID":"706c6a3b-823b-4ea3-b7a8-e20d571d3ace","Type":"ContainerDied","Data":"9c8c8d93509d2a29c183d63351f0748ec6e60414dbb285df980924884b598111"} Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.161623 4881 generic.go:334] "Generic (PLEG): container finished" podID="002a39eb-e2e0-4d3e-8f61-89a539a653a9" containerID="6b8fc2aac0518f9de92cee69b4b59a05f08ed2161c480a5655d85171be0e5a8b" exitCode=0 Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.161737 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" event={"ID":"002a39eb-e2e0-4d3e-8f61-89a539a653a9","Type":"ContainerDied","Data":"6b8fc2aac0518f9de92cee69b4b59a05f08ed2161c480a5655d85171be0e5a8b"} Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.233723 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.242260 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.323252 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" path="/var/lib/kubelet/pods/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad/volumes" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.326738 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.396190 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-cfcdf47c7-fppdw"] Jan 21 11:01:53 crc kubenswrapper[4881]: E0121 11:01:53.397301 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="002a39eb-e2e0-4d3e-8f61-89a539a653a9" containerName="controller-manager" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.397330 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="002a39eb-e2e0-4d3e-8f61-89a539a653a9" containerName="controller-manager" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.397702 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="002a39eb-e2e0-4d3e-8f61-89a539a653a9" containerName="controller-manager" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.398921 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.401070 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cfcdf47c7-fppdw"] Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.437956 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6rmvm" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.462818 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn8zf\" (UniqueName: \"kubernetes.io/projected/002a39eb-e2e0-4d3e-8f61-89a539a653a9-kube-api-access-vn8zf\") pod \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.462993 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-proxy-ca-bundles\") pod \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.463054 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/002a39eb-e2e0-4d3e-8f61-89a539a653a9-serving-cert\") pod \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.463116 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-client-ca\") pod \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.463163 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-config\") pod \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.463405 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-config\") pod \"controller-manager-cfcdf47c7-fppdw\" (UID: 
\"89559857-e73d-4f35-838d-c0b0946939d4\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.463450 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-proxy-ca-bundles\") pod \"controller-manager-cfcdf47c7-fppdw\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.463508 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89559857-e73d-4f35-838d-c0b0946939d4-serving-cert\") pod \"controller-manager-cfcdf47c7-fppdw\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.463578 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-client-ca\") pod \"controller-manager-cfcdf47c7-fppdw\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.463602 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4kr9\" (UniqueName: \"kubernetes.io/projected/89559857-e73d-4f35-838d-c0b0946939d4-kube-api-access-v4kr9\") pod \"controller-manager-cfcdf47c7-fppdw\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.469379 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "002a39eb-e2e0-4d3e-8f61-89a539a653a9" (UID: "002a39eb-e2e0-4d3e-8f61-89a539a653a9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.470434 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-config" (OuterVolumeSpecName: "config") pod "002a39eb-e2e0-4d3e-8f61-89a539a653a9" (UID: "002a39eb-e2e0-4d3e-8f61-89a539a653a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.473582 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/002a39eb-e2e0-4d3e-8f61-89a539a653a9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "002a39eb-e2e0-4d3e-8f61-89a539a653a9" (UID: "002a39eb-e2e0-4d3e-8f61-89a539a653a9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.476655 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-client-ca" (OuterVolumeSpecName: "client-ca") pod "002a39eb-e2e0-4d3e-8f61-89a539a653a9" (UID: "002a39eb-e2e0-4d3e-8f61-89a539a653a9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.477425 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/002a39eb-e2e0-4d3e-8f61-89a539a653a9-kube-api-access-vn8zf" (OuterVolumeSpecName: "kube-api-access-vn8zf") pod "002a39eb-e2e0-4d3e-8f61-89a539a653a9" (UID: "002a39eb-e2e0-4d3e-8f61-89a539a653a9"). InnerVolumeSpecName "kube-api-access-vn8zf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.514612 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2sqlm" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.523109 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.565558 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-config\") pod \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.565648 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrsm4\" (UniqueName: \"kubernetes.io/projected/5b12596d-1f5f-4d81-b664-d0ddee72552c-kube-api-access-lrsm4\") pod \"5b12596d-1f5f-4d81-b664-d0ddee72552c\" (UID: \"5b12596d-1f5f-4d81-b664-d0ddee72552c\") " Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.565724 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b12596d-1f5f-4d81-b664-d0ddee72552c-utilities\") pod \"5b12596d-1f5f-4d81-b664-d0ddee72552c\" (UID: \"5b12596d-1f5f-4d81-b664-d0ddee72552c\") " Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.565877 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-client-ca\") pod \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.565908 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2dkc\" (UniqueName: \"kubernetes.io/projected/2c460bf5-05a1-4977-b889-1a5c3263df33-kube-api-access-p2dkc\") pod \"2c460bf5-05a1-4977-b889-1a5c3263df33\" (UID: \"2c460bf5-05a1-4977-b889-1a5c3263df33\") " Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.565976 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-serving-cert\") pod \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.566021 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b12596d-1f5f-4d81-b664-d0ddee72552c-catalog-content\") pod \"5b12596d-1f5f-4d81-b664-d0ddee72552c\" (UID: \"5b12596d-1f5f-4d81-b664-d0ddee72552c\") " Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.566058 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c460bf5-05a1-4977-b889-1a5c3263df33-utilities\") pod \"2c460bf5-05a1-4977-b889-1a5c3263df33\" (UID: \"2c460bf5-05a1-4977-b889-1a5c3263df33\") " Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.566157 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c460bf5-05a1-4977-b889-1a5c3263df33-catalog-content\") pod \"2c460bf5-05a1-4977-b889-1a5c3263df33\" (UID: \"2c460bf5-05a1-4977-b889-1a5c3263df33\") " Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.566194 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kgjc\" (UniqueName: \"kubernetes.io/projected/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-kube-api-access-9kgjc\") pod \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.566469 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89559857-e73d-4f35-838d-c0b0946939d4-serving-cert\") pod \"controller-manager-cfcdf47c7-fppdw\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.566564 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-client-ca\") pod \"controller-manager-cfcdf47c7-fppdw\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.566590 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4kr9\" (UniqueName: \"kubernetes.io/projected/89559857-e73d-4f35-838d-c0b0946939d4-kube-api-access-v4kr9\") pod \"controller-manager-cfcdf47c7-fppdw\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.566619 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-config\") pod \"controller-manager-cfcdf47c7-fppdw\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.566645 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-proxy-ca-bundles\") pod \"controller-manager-cfcdf47c7-fppdw\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.566692 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vn8zf\" (UniqueName: \"kubernetes.io/projected/002a39eb-e2e0-4d3e-8f61-89a539a653a9-kube-api-access-vn8zf\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.566704 4881 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-proxy-ca-bundles\") on node \"crc\" 
DevicePath \"\"" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.566713 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/002a39eb-e2e0-4d3e-8f61-89a539a653a9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.566723 4881 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.566733 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.568747 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-proxy-ca-bundles\") pod \"controller-manager-cfcdf47c7-fppdw\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.569876 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c460bf5-05a1-4977-b889-1a5c3263df33-utilities" (OuterVolumeSpecName: "utilities") pod "2c460bf5-05a1-4977-b889-1a5c3263df33" (UID: "2c460bf5-05a1-4977-b889-1a5c3263df33"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.571018 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b12596d-1f5f-4d81-b664-d0ddee72552c-utilities" (OuterVolumeSpecName: "utilities") pod "5b12596d-1f5f-4d81-b664-d0ddee72552c" (UID: "5b12596d-1f5f-4d81-b664-d0ddee72552c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.572729 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-config" (OuterVolumeSpecName: "config") pod "706c6a3b-823b-4ea3-b7a8-e20d571d3ace" (UID: "706c6a3b-823b-4ea3-b7a8-e20d571d3ace"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.574404 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-client-ca\") pod \"controller-manager-cfcdf47c7-fppdw\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.574690 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-client-ca" (OuterVolumeSpecName: "client-ca") pod "706c6a3b-823b-4ea3-b7a8-e20d571d3ace" (UID: "706c6a3b-823b-4ea3-b7a8-e20d571d3ace"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.578508 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-config\") pod \"controller-manager-cfcdf47c7-fppdw\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.579122 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89559857-e73d-4f35-838d-c0b0946939d4-serving-cert\") pod \"controller-manager-cfcdf47c7-fppdw\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.579273 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b12596d-1f5f-4d81-b664-d0ddee72552c-kube-api-access-lrsm4" (OuterVolumeSpecName: "kube-api-access-lrsm4") pod "5b12596d-1f5f-4d81-b664-d0ddee72552c" (UID: "5b12596d-1f5f-4d81-b664-d0ddee72552c"). InnerVolumeSpecName "kube-api-access-lrsm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.581644 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c460bf5-05a1-4977-b889-1a5c3263df33-kube-api-access-p2dkc" (OuterVolumeSpecName: "kube-api-access-p2dkc") pod "2c460bf5-05a1-4977-b889-1a5c3263df33" (UID: "2c460bf5-05a1-4977-b889-1a5c3263df33"). InnerVolumeSpecName "kube-api-access-p2dkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.582881 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-kube-api-access-9kgjc" (OuterVolumeSpecName: "kube-api-access-9kgjc") pod "706c6a3b-823b-4ea3-b7a8-e20d571d3ace" (UID: "706c6a3b-823b-4ea3-b7a8-e20d571d3ace"). InnerVolumeSpecName "kube-api-access-9kgjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.584949 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "706c6a3b-823b-4ea3-b7a8-e20d571d3ace" (UID: "706c6a3b-823b-4ea3-b7a8-e20d571d3ace"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.595732 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4kr9\" (UniqueName: \"kubernetes.io/projected/89559857-e73d-4f35-838d-c0b0946939d4-kube-api-access-v4kr9\") pod \"controller-manager-cfcdf47c7-fppdw\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.625656 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b12596d-1f5f-4d81-b664-d0ddee72552c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b12596d-1f5f-4d81-b664-d0ddee72552c" (UID: "5b12596d-1f5f-4d81-b664-d0ddee72552c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.642629 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c460bf5-05a1-4977-b889-1a5c3263df33-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c460bf5-05a1-4977-b889-1a5c3263df33" (UID: "2c460bf5-05a1-4977-b889-1a5c3263df33"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.668640 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c460bf5-05a1-4977-b889-1a5c3263df33-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.668677 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kgjc\" (UniqueName: \"kubernetes.io/projected/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-kube-api-access-9kgjc\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.668692 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.668705 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrsm4\" (UniqueName: \"kubernetes.io/projected/5b12596d-1f5f-4d81-b664-d0ddee72552c-kube-api-access-lrsm4\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.668715 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b12596d-1f5f-4d81-b664-d0ddee72552c-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.668724 4881 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.668734 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2dkc\" (UniqueName: \"kubernetes.io/projected/2c460bf5-05a1-4977-b889-1a5c3263df33-kube-api-access-p2dkc\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.668744 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.668752 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b12596d-1f5f-4d81-b664-d0ddee72552c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.668761 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c460bf5-05a1-4977-b889-1a5c3263df33-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.727757 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.827699 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8"] Jan 21 11:01:53 crc kubenswrapper[4881]: W0121 11:01:53.835845 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbeca3a20_cc8d_4051_80e4_abefdc51ade5.slice/crio-a056711b9c51d593aca8331517f6165e9d28333e5d223c19de2b24f717912a83 WatchSource:0}: Error finding container a056711b9c51d593aca8331517f6165e9d28333e5d223c19de2b24f717912a83: Status 404 returned error can't find the container with id a056711b9c51d593aca8331517f6165e9d28333e5d223c19de2b24f717912a83 Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.881367 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vljfh"] Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.881834 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vljfh" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" containerName="registry-server" containerID="cri-o://0e3e6281eef028f6cd4f512b5ed4a48f81805bf0232c271e4efbf06a7853a75b" gracePeriod=2 Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.071403 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t4zlb"] Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.072827 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t4zlb" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" containerName="registry-server" containerID="cri-o://7e551acaa20677090959425a7116a2212e0375845f7e600b54464bccf79b4461" gracePeriod=2 Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.076613 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cfcdf47c7-fppdw"] Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.186808 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.186829 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" event={"ID":"706c6a3b-823b-4ea3-b7a8-e20d571d3ace","Type":"ContainerDied","Data":"22d022e22752b1a845c64ff7297933c2f9f91e223d3640540e2ab737fe1ace78"} Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.186907 4881 scope.go:117] "RemoveContainer" containerID="9c8c8d93509d2a29c183d63351f0748ec6e60414dbb285df980924884b598111" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.195036 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.195977 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" event={"ID":"002a39eb-e2e0-4d3e-8f61-89a539a653a9","Type":"ContainerDied","Data":"fec206b72c4648e66af3adcacd7cb5106e2766bcb34d529fae1cd757bd777535"} Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.206076 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" event={"ID":"beca3a20-cc8d-4051-80e4-abefdc51ade5","Type":"ContainerStarted","Data":"a056711b9c51d593aca8331517f6165e9d28333e5d223c19de2b24f717912a83"} Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.214462 4881 generic.go:334] "Generic (PLEG): container finished" podID="1d66b837-f7b1-4795-895f-08cdabe48b37" containerID="0e3e6281eef028f6cd4f512b5ed4a48f81805bf0232c271e4efbf06a7853a75b" exitCode=0 Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.214570 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vljfh" event={"ID":"1d66b837-f7b1-4795-895f-08cdabe48b37","Type":"ContainerDied","Data":"0e3e6281eef028f6cd4f512b5ed4a48f81805bf0232c271e4efbf06a7853a75b"} Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.226167 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rmvm" event={"ID":"2c460bf5-05a1-4977-b889-1a5c3263df33","Type":"ContainerDied","Data":"c3a0b0298aa8ab878f3e521eb0f166ff0e56c334391018119468d1c2b03f0be9"} Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.226322 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6rmvm" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.228472 4881 scope.go:117] "RemoveContainer" containerID="6b8fc2aac0518f9de92cee69b4b59a05f08ed2161c480a5655d85171be0e5a8b" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.234629 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sqlm" event={"ID":"5b12596d-1f5f-4d81-b664-d0ddee72552c","Type":"ContainerDied","Data":"06bab0b00f0f71fd0a092b84dfd550234e778896541edbd10dbb4f1a0cb5d5b8"} Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.234767 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2sqlm" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.245544 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" event={"ID":"89559857-e73d-4f35-838d-c0b0946939d4","Type":"ContainerStarted","Data":"ebe56607ace74705e145d654d7bc2814291ec5e33259c85e6447339814042d78"} Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.262353 4881 scope.go:117] "RemoveContainer" containerID="7e5f304bc82a020e253bc1850121534b947e1ce59d3cde3e998cffd1481389a2" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.268480 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wjlxh"] Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.274417 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wjlxh"] Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.323624 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vljfh" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.330602 4881 scope.go:117] "RemoveContainer" containerID="db0493653bc30919d4352c24df01a207c2de62ad8f1fa10ff346fcc988a5549e" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.340015 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8"] Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.348470 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8"] Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.357014 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6rmvm"] Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.368027 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6rmvm"] Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.372235 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2sqlm"] Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.377395 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2sqlm"] Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.381687 4881 scope.go:117] "RemoveContainer" containerID="21ab48233ffe1978a9c9e6217e5905832c0304da6f07fa2e19daa5ca75ac0da7" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.385845 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d66b837-f7b1-4795-895f-08cdabe48b37-catalog-content\") pod \"1d66b837-f7b1-4795-895f-08cdabe48b37\" (UID: \"1d66b837-f7b1-4795-895f-08cdabe48b37\") " Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.386008 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d66b837-f7b1-4795-895f-08cdabe48b37-utilities\") pod \"1d66b837-f7b1-4795-895f-08cdabe48b37\" (UID: \"1d66b837-f7b1-4795-895f-08cdabe48b37\") " Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.386074 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b56ld\" (UniqueName: 
\"kubernetes.io/projected/1d66b837-f7b1-4795-895f-08cdabe48b37-kube-api-access-b56ld\") pod \"1d66b837-f7b1-4795-895f-08cdabe48b37\" (UID: \"1d66b837-f7b1-4795-895f-08cdabe48b37\") " Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.390847 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d66b837-f7b1-4795-895f-08cdabe48b37-utilities" (OuterVolumeSpecName: "utilities") pod "1d66b837-f7b1-4795-895f-08cdabe48b37" (UID: "1d66b837-f7b1-4795-895f-08cdabe48b37"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.402677 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d66b837-f7b1-4795-895f-08cdabe48b37-kube-api-access-b56ld" (OuterVolumeSpecName: "kube-api-access-b56ld") pod "1d66b837-f7b1-4795-895f-08cdabe48b37" (UID: "1d66b837-f7b1-4795-895f-08cdabe48b37"). InnerVolumeSpecName "kube-api-access-b56ld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.412394 4881 scope.go:117] "RemoveContainer" containerID="c77f2373cbe2c6efce94e010b4a6e7c282b2ba984b2b3fef90734b6c51cc06d7" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.422269 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d66b837-f7b1-4795-895f-08cdabe48b37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d66b837-f7b1-4795-895f-08cdabe48b37" (UID: "1d66b837-f7b1-4795-895f-08cdabe48b37"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.452597 4881 scope.go:117] "RemoveContainer" containerID="8c58e8e6d9f4309fce56e3b043abdb46d3d4af579c4a6d9ae43870620be9634e" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.488585 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d66b837-f7b1-4795-895f-08cdabe48b37-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.488635 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b56ld\" (UniqueName: \"kubernetes.io/projected/1d66b837-f7b1-4795-895f-08cdabe48b37-kube-api-access-b56ld\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.488651 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d66b837-f7b1-4795-895f-08cdabe48b37-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.501080 4881 scope.go:117] "RemoveContainer" containerID="5aed93291404e255299931c1a9f3a011b1cb4d3b3ce796db1f1b3e7ec12c142e" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.515278 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t4zlb" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.593179 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn5jn\" (UniqueName: \"kubernetes.io/projected/b83e71f8-970c-4afc-ac31-264c7ca6625a-kube-api-access-sn5jn\") pod \"b83e71f8-970c-4afc-ac31-264c7ca6625a\" (UID: \"b83e71f8-970c-4afc-ac31-264c7ca6625a\") " Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.593336 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b83e71f8-970c-4afc-ac31-264c7ca6625a-utilities\") pod \"b83e71f8-970c-4afc-ac31-264c7ca6625a\" (UID: \"b83e71f8-970c-4afc-ac31-264c7ca6625a\") " Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.593475 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b83e71f8-970c-4afc-ac31-264c7ca6625a-catalog-content\") pod \"b83e71f8-970c-4afc-ac31-264c7ca6625a\" (UID: \"b83e71f8-970c-4afc-ac31-264c7ca6625a\") " Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.594354 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b83e71f8-970c-4afc-ac31-264c7ca6625a-utilities" (OuterVolumeSpecName: "utilities") pod "b83e71f8-970c-4afc-ac31-264c7ca6625a" (UID: "b83e71f8-970c-4afc-ac31-264c7ca6625a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.598753 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b83e71f8-970c-4afc-ac31-264c7ca6625a-kube-api-access-sn5jn" (OuterVolumeSpecName: "kube-api-access-sn5jn") pod "b83e71f8-970c-4afc-ac31-264c7ca6625a" (UID: "b83e71f8-970c-4afc-ac31-264c7ca6625a"). InnerVolumeSpecName "kube-api-access-sn5jn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.695074 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn5jn\" (UniqueName: \"kubernetes.io/projected/b83e71f8-970c-4afc-ac31-264c7ca6625a-kube-api-access-sn5jn\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.695103 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b83e71f8-970c-4afc-ac31-264c7ca6625a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.717187 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b83e71f8-970c-4afc-ac31-264c7ca6625a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b83e71f8-970c-4afc-ac31-264c7ca6625a" (UID: "b83e71f8-970c-4afc-ac31-264c7ca6625a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.796696 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b83e71f8-970c-4afc-ac31-264c7ca6625a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.256675 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" event={"ID":"beca3a20-cc8d-4051-80e4-abefdc51ade5","Type":"ContainerStarted","Data":"9d40e077357163b3f00df547a9ac5607b2669655ed19bf3a13296c1d2659a959"} Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.257843 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.262625 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vljfh" event={"ID":"1d66b837-f7b1-4795-895f-08cdabe48b37","Type":"ContainerDied","Data":"eb22a93b2892f0c51c953eb6eb827724775592dd8224db01464d1014b0260e0e"} Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.262688 4881 scope.go:117] "RemoveContainer" containerID="0e3e6281eef028f6cd4f512b5ed4a48f81805bf0232c271e4efbf06a7853a75b" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.262712 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vljfh" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.266148 4881 generic.go:334] "Generic (PLEG): container finished" podID="b83e71f8-970c-4afc-ac31-264c7ca6625a" containerID="7e551acaa20677090959425a7116a2212e0375845f7e600b54464bccf79b4461" exitCode=0 Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.266205 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4zlb" event={"ID":"b83e71f8-970c-4afc-ac31-264c7ca6625a","Type":"ContainerDied","Data":"7e551acaa20677090959425a7116a2212e0375845f7e600b54464bccf79b4461"} Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.266229 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4zlb" event={"ID":"b83e71f8-970c-4afc-ac31-264c7ca6625a","Type":"ContainerDied","Data":"16d7bf5b9f969471865c2f6c0d0043006c1b79484bd1c97e826d3a03374ea542"} Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.266327 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t4zlb" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.275581 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.282351 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" event={"ID":"89559857-e73d-4f35-838d-c0b0946939d4","Type":"ContainerStarted","Data":"e4842f0920f40f8afe25168938541ec7282f9d06248cee97e875afa522eda397"} Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.282894 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.288105 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" podStartSLOduration=96.288089021 podStartE2EDuration="1m36.288089021s" podCreationTimestamp="2026-01-21 11:00:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:01:55.284373626 +0000 UTC m=+302.544330105" watchObservedRunningTime="2026-01-21 11:01:55.288089021 +0000 UTC m=+302.548045490" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.296566 4881 scope.go:117] "RemoveContainer" containerID="87b3da4f38a8247ed7dbb2b11f2ec14c16c71eee1d17657bf85f241bc0e931f6" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.297906 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.311318 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" podStartSLOduration=19.311302444 podStartE2EDuration="19.311302444s" podCreationTimestamp="2026-01-21 11:01:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:01:55.308980733 +0000 UTC m=+302.568937212" watchObservedRunningTime="2026-01-21 11:01:55.311302444 +0000 UTC m=+302.571258913" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.316649 4881 scope.go:117] "RemoveContainer" containerID="ec4a8cdf9092080c2fbbc3ac32eca21f15705f2f8424796b41499693e29b4095" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.325181 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="002a39eb-e2e0-4d3e-8f61-89a539a653a9" path="/var/lib/kubelet/pods/002a39eb-e2e0-4d3e-8f61-89a539a653a9/volumes" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.326188 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" path="/var/lib/kubelet/pods/2c460bf5-05a1-4977-b889-1a5c3263df33/volumes" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.327399 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" path="/var/lib/kubelet/pods/5b12596d-1f5f-4d81-b664-d0ddee72552c/volumes" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.328068 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="706c6a3b-823b-4ea3-b7a8-e20d571d3ace" 
path="/var/lib/kubelet/pods/706c6a3b-823b-4ea3-b7a8-e20d571d3ace/volumes" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.347090 4881 scope.go:117] "RemoveContainer" containerID="7e551acaa20677090959425a7116a2212e0375845f7e600b54464bccf79b4461" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.363261 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vljfh"] Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.366223 4881 scope.go:117] "RemoveContainer" containerID="d97aa85fa9dba9a5f261efedffb0ffe8efb44a7c0ff638756658eab20e0bacac" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.371353 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vljfh"] Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.408792 4881 scope.go:117] "RemoveContainer" containerID="ae4974769900e5c543fbbb2d217e3f9cdfc7b9998621c36ae6d12bcf65b9b593" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.412986 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t4zlb"] Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.416848 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t4zlb"] Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.446295 4881 scope.go:117] "RemoveContainer" containerID="7e551acaa20677090959425a7116a2212e0375845f7e600b54464bccf79b4461" Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.446847 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e551acaa20677090959425a7116a2212e0375845f7e600b54464bccf79b4461\": container with ID starting with 7e551acaa20677090959425a7116a2212e0375845f7e600b54464bccf79b4461 not found: ID does not exist" containerID="7e551acaa20677090959425a7116a2212e0375845f7e600b54464bccf79b4461" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.446889 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e551acaa20677090959425a7116a2212e0375845f7e600b54464bccf79b4461"} err="failed to get container status \"7e551acaa20677090959425a7116a2212e0375845f7e600b54464bccf79b4461\": rpc error: code = NotFound desc = could not find container \"7e551acaa20677090959425a7116a2212e0375845f7e600b54464bccf79b4461\": container with ID starting with 7e551acaa20677090959425a7116a2212e0375845f7e600b54464bccf79b4461 not found: ID does not exist" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.446918 4881 scope.go:117] "RemoveContainer" containerID="d97aa85fa9dba9a5f261efedffb0ffe8efb44a7c0ff638756658eab20e0bacac" Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.447313 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d97aa85fa9dba9a5f261efedffb0ffe8efb44a7c0ff638756658eab20e0bacac\": container with ID starting with d97aa85fa9dba9a5f261efedffb0ffe8efb44a7c0ff638756658eab20e0bacac not found: ID does not exist" containerID="d97aa85fa9dba9a5f261efedffb0ffe8efb44a7c0ff638756658eab20e0bacac" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.447465 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d97aa85fa9dba9a5f261efedffb0ffe8efb44a7c0ff638756658eab20e0bacac"} err="failed to get container status \"d97aa85fa9dba9a5f261efedffb0ffe8efb44a7c0ff638756658eab20e0bacac\": rpc error: code = NotFound desc = could 
not find container \"d97aa85fa9dba9a5f261efedffb0ffe8efb44a7c0ff638756658eab20e0bacac\": container with ID starting with d97aa85fa9dba9a5f261efedffb0ffe8efb44a7c0ff638756658eab20e0bacac not found: ID does not exist" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.447609 4881 scope.go:117] "RemoveContainer" containerID="ae4974769900e5c543fbbb2d217e3f9cdfc7b9998621c36ae6d12bcf65b9b593" Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.448064 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae4974769900e5c543fbbb2d217e3f9cdfc7b9998621c36ae6d12bcf65b9b593\": container with ID starting with ae4974769900e5c543fbbb2d217e3f9cdfc7b9998621c36ae6d12bcf65b9b593 not found: ID does not exist" containerID="ae4974769900e5c543fbbb2d217e3f9cdfc7b9998621c36ae6d12bcf65b9b593" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.448094 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae4974769900e5c543fbbb2d217e3f9cdfc7b9998621c36ae6d12bcf65b9b593"} err="failed to get container status \"ae4974769900e5c543fbbb2d217e3f9cdfc7b9998621c36ae6d12bcf65b9b593\": rpc error: code = NotFound desc = could not find container \"ae4974769900e5c543fbbb2d217e3f9cdfc7b9998621c36ae6d12bcf65b9b593\": container with ID starting with ae4974769900e5c543fbbb2d217e3f9cdfc7b9998621c36ae6d12bcf65b9b593 not found: ID does not exist" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.895517 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b"] Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.895868 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" containerName="extract-content" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.895889 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" containerName="extract-content" Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.895904 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" containerName="registry-server" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.895913 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" containerName="registry-server" Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.895927 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" containerName="extract-utilities" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.895936 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" containerName="extract-utilities" Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.895949 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" containerName="extract-content" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.895957 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" containerName="extract-content" Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.895970 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" containerName="extract-utilities" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.895978 4881 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" containerName="extract-utilities" Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.895991 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" containerName="extract-content" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.896001 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" containerName="extract-content" Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.896021 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="706c6a3b-823b-4ea3-b7a8-e20d571d3ace" containerName="route-controller-manager" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.896030 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="706c6a3b-823b-4ea3-b7a8-e20d571d3ace" containerName="route-controller-manager" Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.896046 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" containerName="registry-server" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.896055 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" containerName="registry-server" Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.896070 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" containerName="registry-server" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.896078 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" containerName="registry-server" Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.896090 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" containerName="extract-utilities" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.896100 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" containerName="extract-utilities" Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.896111 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" containerName="extract-utilities" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.896119 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" containerName="extract-utilities" Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.896131 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" containerName="extract-content" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.896139 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" containerName="extract-content" Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.896150 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" containerName="registry-server" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.896158 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" containerName="registry-server" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.896289 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" containerName="registry-server" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 
11:01:55.896303 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" containerName="registry-server" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.896315 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" containerName="registry-server" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.896329 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" containerName="registry-server" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.896339 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="706c6a3b-823b-4ea3-b7a8-e20d571d3ace" containerName="route-controller-manager" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.897006 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.900035 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.900856 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.901172 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.902057 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.902252 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.905852 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.919421 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b"] Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.021296 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-client-ca\") pod \"route-controller-manager-54bb857fc-6xg7b\" (UID: \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.021662 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-config\") pod \"route-controller-manager-54bb857fc-6xg7b\" (UID: \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.021860 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-serving-cert\") pod \"route-controller-manager-54bb857fc-6xg7b\" (UID: 
\"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.021994 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wx6k\" (UniqueName: \"kubernetes.io/projected/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-kube-api-access-6wx6k\") pod \"route-controller-manager-54bb857fc-6xg7b\" (UID: \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.123600 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-config\") pod \"route-controller-manager-54bb857fc-6xg7b\" (UID: \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.124295 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-serving-cert\") pod \"route-controller-manager-54bb857fc-6xg7b\" (UID: \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.124420 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wx6k\" (UniqueName: \"kubernetes.io/projected/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-kube-api-access-6wx6k\") pod \"route-controller-manager-54bb857fc-6xg7b\" (UID: \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.124752 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-client-ca\") pod \"route-controller-manager-54bb857fc-6xg7b\" (UID: \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.125149 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-config\") pod \"route-controller-manager-54bb857fc-6xg7b\" (UID: \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.125805 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-client-ca\") pod \"route-controller-manager-54bb857fc-6xg7b\" (UID: \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.132027 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-serving-cert\") pod \"route-controller-manager-54bb857fc-6xg7b\" (UID: \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " 
pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.145227 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wx6k\" (UniqueName: \"kubernetes.io/projected/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-kube-api-access-6wx6k\") pod \"route-controller-manager-54bb857fc-6xg7b\" (UID: \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.225711 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.486407 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b"] Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.614270 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-cfcdf47c7-fppdw"] Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.698760 4881 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.699040 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://ffc7cfcc896e97bc89bcafadd903d32675c37638ae26cc272102f0c6d6bc59d1" gracePeriod=5 Jan 21 11:01:57 crc kubenswrapper[4881]: I0121 11:01:57.327555 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" path="/var/lib/kubelet/pods/1d66b837-f7b1-4795-895f-08cdabe48b37/volumes" Jan 21 11:01:57 crc kubenswrapper[4881]: I0121 11:01:57.328528 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" path="/var/lib/kubelet/pods/b83e71f8-970c-4afc-ac31-264c7ca6625a/volumes" Jan 21 11:01:57 crc kubenswrapper[4881]: I0121 11:01:57.329217 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" event={"ID":"b1ebf4ad-7b0d-4711-93bd-206ec36e7202","Type":"ContainerStarted","Data":"03285c7f75ca0c5ea5fc4bbbace73cfbfd25315c2b430af309cd5af6d0d8503a"} Jan 21 11:01:57 crc kubenswrapper[4881]: I0121 11:01:57.329271 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" event={"ID":"b1ebf4ad-7b0d-4711-93bd-206ec36e7202","Type":"ContainerStarted","Data":"cf1ccaca8e9193a4546c7cd1215ccba45fb7b47029b1d20906ee6e97c1d22afe"} Jan 21 11:01:57 crc kubenswrapper[4881]: I0121 11:01:57.344568 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" podStartSLOduration=21.344539966 podStartE2EDuration="21.344539966s" podCreationTimestamp="2026-01-21 11:01:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:01:57.336053146 +0000 UTC m=+304.596009615" watchObservedRunningTime="2026-01-21 11:01:57.344539966 +0000 UTC m=+304.604496435" Jan 21 11:01:58 crc kubenswrapper[4881]: 
Jan 21 11:01:58 crc kubenswrapper[4881]: E0121 11:01:58.276882 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod706c6a3b_823b_4ea3_b7a8_e20d571d3ace.slice/crio-conmon-9c8c8d93509d2a29c183d63351f0748ec6e60414dbb285df980924884b598111.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.326867 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" podUID="89559857-e73d-4f35-838d-c0b0946939d4" containerName="controller-manager" containerID="cri-o://e4842f0920f40f8afe25168938541ec7282f9d06248cee97e875afa522eda397" gracePeriod=30
Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.327187 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b"
Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.347281 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b"
Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.800587 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw"
Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.837703 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-86b9bf4878-kbmxb"]
Jan 21 11:01:58 crc kubenswrapper[4881]: E0121 11:01:58.838011 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89559857-e73d-4f35-838d-c0b0946939d4" containerName="controller-manager"
Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.838029 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="89559857-e73d-4f35-838d-c0b0946939d4" containerName="controller-manager"
Jan 21 11:01:58 crc kubenswrapper[4881]: E0121 11:01:58.838046 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.838055 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.838157 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="89559857-e73d-4f35-838d-c0b0946939d4" containerName="controller-manager"
Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.838172 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.838553 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.851283 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86b9bf4878-kbmxb"] Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.864730 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89559857-e73d-4f35-838d-c0b0946939d4-serving-cert\") pod \"89559857-e73d-4f35-838d-c0b0946939d4\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.864772 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4kr9\" (UniqueName: \"kubernetes.io/projected/89559857-e73d-4f35-838d-c0b0946939d4-kube-api-access-v4kr9\") pod \"89559857-e73d-4f35-838d-c0b0946939d4\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.864848 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-proxy-ca-bundles\") pod \"89559857-e73d-4f35-838d-c0b0946939d4\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.864897 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-client-ca\") pod \"89559857-e73d-4f35-838d-c0b0946939d4\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.864925 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-config\") pod \"89559857-e73d-4f35-838d-c0b0946939d4\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.866137 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-config" (OuterVolumeSpecName: "config") pod "89559857-e73d-4f35-838d-c0b0946939d4" (UID: "89559857-e73d-4f35-838d-c0b0946939d4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.867558 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "89559857-e73d-4f35-838d-c0b0946939d4" (UID: "89559857-e73d-4f35-838d-c0b0946939d4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.870892 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-client-ca" (OuterVolumeSpecName: "client-ca") pod "89559857-e73d-4f35-838d-c0b0946939d4" (UID: "89559857-e73d-4f35-838d-c0b0946939d4"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.873967 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89559857-e73d-4f35-838d-c0b0946939d4-kube-api-access-v4kr9" (OuterVolumeSpecName: "kube-api-access-v4kr9") pod "89559857-e73d-4f35-838d-c0b0946939d4" (UID: "89559857-e73d-4f35-838d-c0b0946939d4"). InnerVolumeSpecName "kube-api-access-v4kr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.874197 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89559857-e73d-4f35-838d-c0b0946939d4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "89559857-e73d-4f35-838d-c0b0946939d4" (UID: "89559857-e73d-4f35-838d-c0b0946939d4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.967019 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpspj\" (UniqueName: \"kubernetes.io/projected/6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f-kube-api-access-wpspj\") pod \"controller-manager-86b9bf4878-kbmxb\" (UID: \"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f\") " pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.967084 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f-config\") pod \"controller-manager-86b9bf4878-kbmxb\" (UID: \"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f\") " pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.967144 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f-proxy-ca-bundles\") pod \"controller-manager-86b9bf4878-kbmxb\" (UID: \"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f\") " pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.967184 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f-client-ca\") pod \"controller-manager-86b9bf4878-kbmxb\" (UID: \"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f\") " pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.967258 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f-serving-cert\") pod \"controller-manager-86b9bf4878-kbmxb\" (UID: \"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f\") " pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.967398 4881 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.967418 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.967449 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89559857-e73d-4f35-838d-c0b0946939d4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.967462 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4kr9\" (UniqueName: \"kubernetes.io/projected/89559857-e73d-4f35-838d-c0b0946939d4-kube-api-access-v4kr9\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.967471 4881 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.072908 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f-serving-cert\") pod \"controller-manager-86b9bf4878-kbmxb\" (UID: \"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f\") " pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.072993 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpspj\" (UniqueName: \"kubernetes.io/projected/6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f-kube-api-access-wpspj\") pod \"controller-manager-86b9bf4878-kbmxb\" (UID: \"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f\") " pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.073018 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f-config\") pod \"controller-manager-86b9bf4878-kbmxb\" (UID: \"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f\") " pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.073047 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f-proxy-ca-bundles\") pod \"controller-manager-86b9bf4878-kbmxb\" (UID: \"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f\") " pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.073077 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f-client-ca\") pod \"controller-manager-86b9bf4878-kbmxb\" (UID: \"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f\") " pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.075413 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f-client-ca\") pod \"controller-manager-86b9bf4878-kbmxb\" (UID: \"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f\") " pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.075706 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f-config\") pod \"controller-manager-86b9bf4878-kbmxb\" (UID: \"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f\") " pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.075970 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f-proxy-ca-bundles\") pod \"controller-manager-86b9bf4878-kbmxb\" (UID: \"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f\") " pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.081359 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f-serving-cert\") pod \"controller-manager-86b9bf4878-kbmxb\" (UID: \"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f\") " pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.090888 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpspj\" (UniqueName: \"kubernetes.io/projected/6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f-kube-api-access-wpspj\") pod \"controller-manager-86b9bf4878-kbmxb\" (UID: \"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f\") " pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.183738 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.345392 4881 generic.go:334] "Generic (PLEG): container finished" podID="89559857-e73d-4f35-838d-c0b0946939d4" containerID="e4842f0920f40f8afe25168938541ec7282f9d06248cee97e875afa522eda397" exitCode=0 Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.346258 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.350517 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" event={"ID":"89559857-e73d-4f35-838d-c0b0946939d4","Type":"ContainerDied","Data":"e4842f0920f40f8afe25168938541ec7282f9d06248cee97e875afa522eda397"} Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.350571 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" event={"ID":"89559857-e73d-4f35-838d-c0b0946939d4","Type":"ContainerDied","Data":"ebe56607ace74705e145d654d7bc2814291ec5e33259c85e6447339814042d78"} Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.350590 4881 scope.go:117] "RemoveContainer" containerID="e4842f0920f40f8afe25168938541ec7282f9d06248cee97e875afa522eda397" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.397954 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-cfcdf47c7-fppdw"] Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.401559 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-cfcdf47c7-fppdw"] Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.407336 4881 scope.go:117] "RemoveContainer" containerID="e4842f0920f40f8afe25168938541ec7282f9d06248cee97e875afa522eda397" Jan 21 11:01:59 crc kubenswrapper[4881]: E0121 11:01:59.409127 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4842f0920f40f8afe25168938541ec7282f9d06248cee97e875afa522eda397\": container with ID starting with e4842f0920f40f8afe25168938541ec7282f9d06248cee97e875afa522eda397 not found: ID does not exist" containerID="e4842f0920f40f8afe25168938541ec7282f9d06248cee97e875afa522eda397" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.409172 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4842f0920f40f8afe25168938541ec7282f9d06248cee97e875afa522eda397"} err="failed to get container status \"e4842f0920f40f8afe25168938541ec7282f9d06248cee97e875afa522eda397\": rpc error: code = NotFound desc = could not find container \"e4842f0920f40f8afe25168938541ec7282f9d06248cee97e875afa522eda397\": container with ID starting with e4842f0920f40f8afe25168938541ec7282f9d06248cee97e875afa522eda397 not found: ID does not exist" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.715742 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86b9bf4878-kbmxb"] Jan 21 11:02:00 crc kubenswrapper[4881]: I0121 11:02:00.355420 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" event={"ID":"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f","Type":"ContainerStarted","Data":"66bf8da974464776256d6a59805c0099cfc6baf199f22bc813539a2a6a44acee"} Jan 21 11:02:00 crc kubenswrapper[4881]: I0121 11:02:00.355996 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" event={"ID":"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f","Type":"ContainerStarted","Data":"332bcaa29cc1493ca4d3b0a99be15366debbc1695857b25b01aa44f8caa14d80"} Jan 21 11:02:00 crc kubenswrapper[4881]: I0121 11:02:00.358527 4881 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:02:00 crc kubenswrapper[4881]: I0121 11:02:00.365939 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:02:00 crc kubenswrapper[4881]: I0121 11:02:00.383527 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" podStartSLOduration=4.3834904 podStartE2EDuration="4.3834904s" podCreationTimestamp="2026-01-21 11:01:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:02:00.37615035 +0000 UTC m=+307.636106829" watchObservedRunningTime="2026-01-21 11:02:00.3834904 +0000 UTC m=+307.643446879" Jan 21 11:02:01 crc kubenswrapper[4881]: I0121 11:02:01.323115 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89559857-e73d-4f35-838d-c0b0946939d4" path="/var/lib/kubelet/pods/89559857-e73d-4f35-838d-c0b0946939d4/volumes" Jan 21 11:02:01 crc kubenswrapper[4881]: I0121 11:02:01.837173 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 11:02:01 crc kubenswrapper[4881]: I0121 11:02:01.837323 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:02:01 crc kubenswrapper[4881]: I0121 11:02:01.918994 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:02:01 crc kubenswrapper[4881]: I0121 11:02:01.918885 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 11:02:01 crc kubenswrapper[4881]: I0121 11:02:01.919663 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 11:02:01 crc kubenswrapper[4881]: I0121 11:02:01.921049 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 11:02:01 crc kubenswrapper[4881]: I0121 11:02:01.921137 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 11:02:01 crc kubenswrapper[4881]: I0121 11:02:01.921206 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 11:02:01 crc kubenswrapper[4881]: I0121 11:02:01.921564 4881 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:01 crc kubenswrapper[4881]: I0121 11:02:01.921614 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:02:01 crc kubenswrapper[4881]: I0121 11:02:01.921658 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:02:01 crc kubenswrapper[4881]: I0121 11:02:01.921691 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:02:01 crc kubenswrapper[4881]: I0121 11:02:01.930358 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.022804 4881 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.022842 4881 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.022854 4881 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.022863 4881 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.384224 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.384715 4881 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="ffc7cfcc896e97bc89bcafadd903d32675c37638ae26cc272102f0c6d6bc59d1" exitCode=137 Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.384917 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.384976 4881 scope.go:117] "RemoveContainer" containerID="ffc7cfcc896e97bc89bcafadd903d32675c37638ae26cc272102f0c6d6bc59d1" Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.417310 4881 scope.go:117] "RemoveContainer" containerID="ffc7cfcc896e97bc89bcafadd903d32675c37638ae26cc272102f0c6d6bc59d1" Jan 21 11:02:02 crc kubenswrapper[4881]: E0121 11:02:02.418077 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffc7cfcc896e97bc89bcafadd903d32675c37638ae26cc272102f0c6d6bc59d1\": container with ID starting with ffc7cfcc896e97bc89bcafadd903d32675c37638ae26cc272102f0c6d6bc59d1 not found: ID does not exist" containerID="ffc7cfcc896e97bc89bcafadd903d32675c37638ae26cc272102f0c6d6bc59d1" Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.418138 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffc7cfcc896e97bc89bcafadd903d32675c37638ae26cc272102f0c6d6bc59d1"} err="failed to get container status \"ffc7cfcc896e97bc89bcafadd903d32675c37638ae26cc272102f0c6d6bc59d1\": rpc error: code = NotFound desc = could not find container \"ffc7cfcc896e97bc89bcafadd903d32675c37638ae26cc272102f0c6d6bc59d1\": container with ID starting with ffc7cfcc896e97bc89bcafadd903d32675c37638ae26cc272102f0c6d6bc59d1 not found: ID does not exist" Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.959968 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q6dn5"] Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.960325 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q6dn5" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" containerName="registry-server" containerID="cri-o://e42581773a8d4ea1772dd60eaf9071bf2de0cdd39b8e134e5ac5a682d95b642f" gracePeriod=30 Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.974186 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v5n2s"] Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.974604 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v5n2s" podUID="e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" containerName="registry-server" containerID="cri-o://091b8c7421a6daba2d38abc6600200f92a99a9d9fffb2a18673337cc1cab5a28" gracePeriod=30 Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.998121 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xmq82"] Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.998418 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" podUID="e94f1e92-21b2-44c9-b499-b879850c288d" containerName="marketplace-operator" containerID="cri-o://814fc7d7b657d30002e0169875973f3d65029d02d56ac8702f4d08fa12940079" gracePeriod=30 Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.005621 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-89m75"] Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.006027 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-89m75" 
podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" containerName="registry-server" containerID="cri-o://d4c87b729f18eaf9f12531e5147374286d6a7a44e910d96df5b3275a242bc490" gracePeriod=30 Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.016497 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kfmhs"] Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.016913 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kfmhs" podUID="d318e830-067f-4722-9d74-a45fcefc939d" containerName="registry-server" containerID="cri-o://ea62c10cfd248c0ef9c6d0347f5a3b0a2b7e8d1e35c546c01d7fdadf484cb508" gracePeriod=30 Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.059100 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vrcvz"] Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.060661 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.062634 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vrcvz"] Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.140131 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/98f0e6fe-f27f-4d75-9149-6238b2220849-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vrcvz\" (UID: \"98f0e6fe-f27f-4d75-9149-6238b2220849\") " pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.140631 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqb8m\" (UniqueName: \"kubernetes.io/projected/98f0e6fe-f27f-4d75-9149-6238b2220849-kube-api-access-mqb8m\") pod \"marketplace-operator-79b997595-vrcvz\" (UID: \"98f0e6fe-f27f-4d75-9149-6238b2220849\") " pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.140698 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/98f0e6fe-f27f-4d75-9149-6238b2220849-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vrcvz\" (UID: \"98f0e6fe-f27f-4d75-9149-6238b2220849\") " pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.242439 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/98f0e6fe-f27f-4d75-9149-6238b2220849-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vrcvz\" (UID: \"98f0e6fe-f27f-4d75-9149-6238b2220849\") " pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.242511 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqb8m\" (UniqueName: \"kubernetes.io/projected/98f0e6fe-f27f-4d75-9149-6238b2220849-kube-api-access-mqb8m\") pod \"marketplace-operator-79b997595-vrcvz\" (UID: \"98f0e6fe-f27f-4d75-9149-6238b2220849\") " pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" Jan 21 11:02:03 crc 
kubenswrapper[4881]: I0121 11:02:03.242561 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/98f0e6fe-f27f-4d75-9149-6238b2220849-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vrcvz\" (UID: \"98f0e6fe-f27f-4d75-9149-6238b2220849\") " pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.252373 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/98f0e6fe-f27f-4d75-9149-6238b2220849-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vrcvz\" (UID: \"98f0e6fe-f27f-4d75-9149-6238b2220849\") " pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.271123 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/98f0e6fe-f27f-4d75-9149-6238b2220849-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vrcvz\" (UID: \"98f0e6fe-f27f-4d75-9149-6238b2220849\") " pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.277577 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqb8m\" (UniqueName: \"kubernetes.io/projected/98f0e6fe-f27f-4d75-9149-6238b2220849-kube-api-access-mqb8m\") pod \"marketplace-operator-79b997595-vrcvz\" (UID: \"98f0e6fe-f27f-4d75-9149-6238b2220849\") " pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.319698 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.320248 4881 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.336801 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.336875 4881 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="1c8815a8-fd68-4185-92ad-520c398cd927" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.345111 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.345162 4881 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="1c8815a8-fd68-4185-92ad-520c398cd927" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.403414 4881 generic.go:334] "Generic (PLEG): container finished" podID="8e002e57-13ab-477a-9e16-980e13b5e47f" containerID="e42581773a8d4ea1772dd60eaf9071bf2de0cdd39b8e134e5ac5a682d95b642f" exitCode=0 Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.403461 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6dn5" 
event={"ID":"8e002e57-13ab-477a-9e16-980e13b5e47f","Type":"ContainerDied","Data":"e42581773a8d4ea1772dd60eaf9071bf2de0cdd39b8e134e5ac5a682d95b642f"} Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.408752 4881 generic.go:334] "Generic (PLEG): container finished" podID="075db786-6ad0-4982-b70e-bd05d4f240ec" containerID="d4c87b729f18eaf9f12531e5147374286d6a7a44e910d96df5b3275a242bc490" exitCode=0 Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.408846 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89m75" event={"ID":"075db786-6ad0-4982-b70e-bd05d4f240ec","Type":"ContainerDied","Data":"d4c87b729f18eaf9f12531e5147374286d6a7a44e910d96df5b3275a242bc490"} Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.410434 4881 generic.go:334] "Generic (PLEG): container finished" podID="e94f1e92-21b2-44c9-b499-b879850c288d" containerID="814fc7d7b657d30002e0169875973f3d65029d02d56ac8702f4d08fa12940079" exitCode=0 Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.410492 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" event={"ID":"e94f1e92-21b2-44c9-b499-b879850c288d","Type":"ContainerDied","Data":"814fc7d7b657d30002e0169875973f3d65029d02d56ac8702f4d08fa12940079"} Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.423245 4881 generic.go:334] "Generic (PLEG): container finished" podID="e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" containerID="091b8c7421a6daba2d38abc6600200f92a99a9d9fffb2a18673337cc1cab5a28" exitCode=0 Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.423350 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5n2s" event={"ID":"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a","Type":"ContainerDied","Data":"091b8c7421a6daba2d38abc6600200f92a99a9d9fffb2a18673337cc1cab5a28"} Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.423422 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5n2s" event={"ID":"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a","Type":"ContainerDied","Data":"79b5df43169324987a329525742a5078ed6a8e75640eab433d3baf2cf413407f"} Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.423436 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79b5df43169324987a329525742a5078ed6a8e75640eab433d3baf2cf413407f" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.426746 4881 generic.go:334] "Generic (PLEG): container finished" podID="d318e830-067f-4722-9d74-a45fcefc939d" containerID="ea62c10cfd248c0ef9c6d0347f5a3b0a2b7e8d1e35c546c01d7fdadf484cb508" exitCode=0 Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.426808 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfmhs" event={"ID":"d318e830-067f-4722-9d74-a45fcefc939d","Type":"ContainerDied","Data":"ea62c10cfd248c0ef9c6d0347f5a3b0a2b7e8d1e35c546c01d7fdadf484cb508"} Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.437067 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.441329 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v5n2s" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.497690 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q6dn5" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.547834 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-utilities\") pod \"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a\" (UID: \"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a\") " Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.547935 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e002e57-13ab-477a-9e16-980e13b5e47f-utilities\") pod \"8e002e57-13ab-477a-9e16-980e13b5e47f\" (UID: \"8e002e57-13ab-477a-9e16-980e13b5e47f\") " Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.547969 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g42w8\" (UniqueName: \"kubernetes.io/projected/8e002e57-13ab-477a-9e16-980e13b5e47f-kube-api-access-g42w8\") pod \"8e002e57-13ab-477a-9e16-980e13b5e47f\" (UID: \"8e002e57-13ab-477a-9e16-980e13b5e47f\") " Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.547998 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e002e57-13ab-477a-9e16-980e13b5e47f-catalog-content\") pod \"8e002e57-13ab-477a-9e16-980e13b5e47f\" (UID: \"8e002e57-13ab-477a-9e16-980e13b5e47f\") " Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.548029 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf89m\" (UniqueName: \"kubernetes.io/projected/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-kube-api-access-mf89m\") pod \"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a\" (UID: \"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a\") " Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.548073 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-catalog-content\") pod \"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a\" (UID: \"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a\") " Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.548784 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e002e57-13ab-477a-9e16-980e13b5e47f-utilities" (OuterVolumeSpecName: "utilities") pod "8e002e57-13ab-477a-9e16-980e13b5e47f" (UID: "8e002e57-13ab-477a-9e16-980e13b5e47f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.566184 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-kube-api-access-mf89m" (OuterVolumeSpecName: "kube-api-access-mf89m") pod "e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" (UID: "e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a"). InnerVolumeSpecName "kube-api-access-mf89m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.566355 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e002e57-13ab-477a-9e16-980e13b5e47f-kube-api-access-g42w8" (OuterVolumeSpecName: "kube-api-access-g42w8") pod "8e002e57-13ab-477a-9e16-980e13b5e47f" (UID: "8e002e57-13ab-477a-9e16-980e13b5e47f"). InnerVolumeSpecName "kube-api-access-g42w8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.570695 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-utilities" (OuterVolumeSpecName: "utilities") pod "e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" (UID: "e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.611687 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e002e57-13ab-477a-9e16-980e13b5e47f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e002e57-13ab-477a-9e16-980e13b5e47f" (UID: "8e002e57-13ab-477a-9e16-980e13b5e47f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.617884 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" (UID: "e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.650608 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.650661 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e002e57-13ab-477a-9e16-980e13b5e47f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.650671 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g42w8\" (UniqueName: \"kubernetes.io/projected/8e002e57-13ab-477a-9e16-980e13b5e47f-kube-api-access-g42w8\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.650684 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e002e57-13ab-477a-9e16-980e13b5e47f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.650694 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mf89m\" (UniqueName: \"kubernetes.io/projected/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-kube-api-access-mf89m\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.650701 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.725916 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89m75" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.734364 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.752166 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2qtc\" (UniqueName: \"kubernetes.io/projected/075db786-6ad0-4982-b70e-bd05d4f240ec-kube-api-access-q2qtc\") pod \"075db786-6ad0-4982-b70e-bd05d4f240ec\" (UID: \"075db786-6ad0-4982-b70e-bd05d4f240ec\") " Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.752330 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/075db786-6ad0-4982-b70e-bd05d4f240ec-utilities\") pod \"075db786-6ad0-4982-b70e-bd05d4f240ec\" (UID: \"075db786-6ad0-4982-b70e-bd05d4f240ec\") " Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.752440 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/075db786-6ad0-4982-b70e-bd05d4f240ec-catalog-content\") pod \"075db786-6ad0-4982-b70e-bd05d4f240ec\" (UID: \"075db786-6ad0-4982-b70e-bd05d4f240ec\") " Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.754039 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/075db786-6ad0-4982-b70e-bd05d4f240ec-utilities" (OuterVolumeSpecName: "utilities") pod "075db786-6ad0-4982-b70e-bd05d4f240ec" (UID: "075db786-6ad0-4982-b70e-bd05d4f240ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.757666 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/075db786-6ad0-4982-b70e-bd05d4f240ec-kube-api-access-q2qtc" (OuterVolumeSpecName: "kube-api-access-q2qtc") pod "075db786-6ad0-4982-b70e-bd05d4f240ec" (UID: "075db786-6ad0-4982-b70e-bd05d4f240ec"). InnerVolumeSpecName "kube-api-access-q2qtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.762652 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kfmhs" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.800820 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/075db786-6ad0-4982-b70e-bd05d4f240ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "075db786-6ad0-4982-b70e-bd05d4f240ec" (UID: "075db786-6ad0-4982-b70e-bd05d4f240ec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.856106 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fc6f2\" (UniqueName: \"kubernetes.io/projected/d318e830-067f-4722-9d74-a45fcefc939d-kube-api-access-fc6f2\") pod \"d318e830-067f-4722-9d74-a45fcefc939d\" (UID: \"d318e830-067f-4722-9d74-a45fcefc939d\") " Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.856207 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e94f1e92-21b2-44c9-b499-b879850c288d-marketplace-operator-metrics\") pod \"e94f1e92-21b2-44c9-b499-b879850c288d\" (UID: \"e94f1e92-21b2-44c9-b499-b879850c288d\") " Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.856259 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e94f1e92-21b2-44c9-b499-b879850c288d-marketplace-trusted-ca\") pod \"e94f1e92-21b2-44c9-b499-b879850c288d\" (UID: \"e94f1e92-21b2-44c9-b499-b879850c288d\") " Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.856295 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d318e830-067f-4722-9d74-a45fcefc939d-catalog-content\") pod \"d318e830-067f-4722-9d74-a45fcefc939d\" (UID: \"d318e830-067f-4722-9d74-a45fcefc939d\") " Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.856348 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d318e830-067f-4722-9d74-a45fcefc939d-utilities\") pod \"d318e830-067f-4722-9d74-a45fcefc939d\" (UID: \"d318e830-067f-4722-9d74-a45fcefc939d\") " Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.856465 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-995tp\" (UniqueName: \"kubernetes.io/projected/e94f1e92-21b2-44c9-b499-b879850c288d-kube-api-access-995tp\") pod \"e94f1e92-21b2-44c9-b499-b879850c288d\" (UID: \"e94f1e92-21b2-44c9-b499-b879850c288d\") " Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.857085 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/075db786-6ad0-4982-b70e-bd05d4f240ec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.857119 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2qtc\" (UniqueName: \"kubernetes.io/projected/075db786-6ad0-4982-b70e-bd05d4f240ec-kube-api-access-q2qtc\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.857137 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/075db786-6ad0-4982-b70e-bd05d4f240ec-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.857457 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e94f1e92-21b2-44c9-b499-b879850c288d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "e94f1e92-21b2-44c9-b499-b879850c288d" (UID: "e94f1e92-21b2-44c9-b499-b879850c288d"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.858364 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d318e830-067f-4722-9d74-a45fcefc939d-utilities" (OuterVolumeSpecName: "utilities") pod "d318e830-067f-4722-9d74-a45fcefc939d" (UID: "d318e830-067f-4722-9d74-a45fcefc939d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.863550 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e94f1e92-21b2-44c9-b499-b879850c288d-kube-api-access-995tp" (OuterVolumeSpecName: "kube-api-access-995tp") pod "e94f1e92-21b2-44c9-b499-b879850c288d" (UID: "e94f1e92-21b2-44c9-b499-b879850c288d"). InnerVolumeSpecName "kube-api-access-995tp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.864263 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d318e830-067f-4722-9d74-a45fcefc939d-kube-api-access-fc6f2" (OuterVolumeSpecName: "kube-api-access-fc6f2") pod "d318e830-067f-4722-9d74-a45fcefc939d" (UID: "d318e830-067f-4722-9d74-a45fcefc939d"). InnerVolumeSpecName "kube-api-access-fc6f2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.868334 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e94f1e92-21b2-44c9-b499-b879850c288d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "e94f1e92-21b2-44c9-b499-b879850c288d" (UID: "e94f1e92-21b2-44c9-b499-b879850c288d"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.959452 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fc6f2\" (UniqueName: \"kubernetes.io/projected/d318e830-067f-4722-9d74-a45fcefc939d-kube-api-access-fc6f2\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.959520 4881 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e94f1e92-21b2-44c9-b499-b879850c288d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.959542 4881 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e94f1e92-21b2-44c9-b499-b879850c288d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.959560 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d318e830-067f-4722-9d74-a45fcefc939d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.959574 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-995tp\" (UniqueName: \"kubernetes.io/projected/e94f1e92-21b2-44c9-b499-b879850c288d-kube-api-access-995tp\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.983859 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vrcvz"] Jan 21 11:02:03 crc kubenswrapper[4881]: W0121 11:02:03.990099 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98f0e6fe_f27f_4d75_9149_6238b2220849.slice/crio-ea34048b1edcabeb6567b730e6cb5d995f3b84ecb21eb2f187130d4fa8f74bc3 WatchSource:0}: Error finding container ea34048b1edcabeb6567b730e6cb5d995f3b84ecb21eb2f187130d4fa8f74bc3: Status 404 returned error can't find the container with id ea34048b1edcabeb6567b730e6cb5d995f3b84ecb21eb2f187130d4fa8f74bc3 Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.030547 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d318e830-067f-4722-9d74-a45fcefc939d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d318e830-067f-4722-9d74-a45fcefc939d" (UID: "d318e830-067f-4722-9d74-a45fcefc939d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.062827 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d318e830-067f-4722-9d74-a45fcefc939d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.437710 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.437719 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" event={"ID":"e94f1e92-21b2-44c9-b499-b879850c288d","Type":"ContainerDied","Data":"123c57f996d77041997b15262c61902d2eed5d15c9314dac5b070f52214a0ad3"} Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.437833 4881 scope.go:117] "RemoveContainer" containerID="814fc7d7b657d30002e0169875973f3d65029d02d56ac8702f4d08fa12940079" Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.441946 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" event={"ID":"98f0e6fe-f27f-4d75-9149-6238b2220849","Type":"ContainerStarted","Data":"ea34048b1edcabeb6567b730e6cb5d995f3b84ecb21eb2f187130d4fa8f74bc3"} Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.448389 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfmhs" event={"ID":"d318e830-067f-4722-9d74-a45fcefc939d","Type":"ContainerDied","Data":"b87ddedd309d60e82b2425e90c86377b7db5b6d93701316fb318e5a216d01095"} Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.448439 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kfmhs" Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.453614 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6dn5" event={"ID":"8e002e57-13ab-477a-9e16-980e13b5e47f","Type":"ContainerDied","Data":"a5c87f9c9c2e9ea53443d498b2b01400a8b6111456d79eeb2d2d4b28aa714ca1"} Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.453722 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q6dn5" Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.458384 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v5n2s" Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.458699 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89m75" event={"ID":"075db786-6ad0-4982-b70e-bd05d4f240ec","Type":"ContainerDied","Data":"97ca6fad994e892affd0e053e6d3515afda4b44ce01474758415dca871d6c00b"} Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.459062 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89m75" Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.462295 4881 scope.go:117] "RemoveContainer" containerID="ea62c10cfd248c0ef9c6d0347f5a3b0a2b7e8d1e35c546c01d7fdadf484cb508" Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.490030 4881 scope.go:117] "RemoveContainer" containerID="456438ece135082aa65a1f9d3e1df54da4ad18d3ac41d1e2ac75d98b61443cef" Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.496258 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xmq82"] Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.506402 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xmq82"] Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.513260 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kfmhs"] Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.518993 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kfmhs"] Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.523259 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q6dn5"] Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.533394 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q6dn5"] Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.537693 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v5n2s"] Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.543226 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v5n2s"] Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.546248 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-89m75"] Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.548936 4881 scope.go:117] "RemoveContainer" containerID="b9a009384ba81492213bce1a87a61e1b83f262354a9aea725ad849bc0749a5f7" Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.549050 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-89m75"] Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.562307 4881 scope.go:117] "RemoveContainer" containerID="e42581773a8d4ea1772dd60eaf9071bf2de0cdd39b8e134e5ac5a682d95b642f" Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.584992 4881 scope.go:117] "RemoveContainer" containerID="cad9f8570b6b7c8359172ebecd350bcad67cfe5e05e5aeca3f0a038ec3357bb5" Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.607719 4881 scope.go:117] "RemoveContainer" containerID="1ccb96495e693b437b8f3969fa58a55b9e7011c267f14a44820d1cfd34daabf3" Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.636419 4881 scope.go:117] "RemoveContainer" containerID="d4c87b729f18eaf9f12531e5147374286d6a7a44e910d96df5b3275a242bc490" Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.656009 4881 scope.go:117] "RemoveContainer" containerID="a06c8d6c70785e0e51b0e238072a99f6a50caf04a590fb7ba69cc08788ffee9a" Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.693053 4881 scope.go:117] "RemoveContainer" containerID="aa990b30489b423fbac7484510b784c9211e2f63bd3366b894aa031bc0754115"
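[Annotation — the run above is the kubelet's normal volume-teardown sequence for the five deleted openshift-marketplace pods. Each volume passes through three log signatures: reconciler_common.go:159 ("operationExecutor.UnmountVolume started"), operation_generator.go:803 ("UnmountVolume.TearDown succeeded"), and reconciler_common.go:293 ("Volume detached"); the "SyncLoop DELETE"/"SyncLoop REMOVE" pairs are the API-side deletion, and the scope.go:117 "RemoveContainer" burst is garbage collection of the dead containers' IDs. A minimal triage sketch in Go — not kubelet code; the regexes are assumptions fitted to the lines in this capture — that folds the three signatures into per-volume state so a teardown that never reaches "detached" stands out:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Later signatures overwrite earlier ones for the same pod-UID/volume key.
    var steps = []struct {
        name string
        re   *regexp.Regexp
    }{
        {"unmount-started", regexp.MustCompile(`UnmountVolume started for volume \\"([^"\\]+)\\"`)},
        {"teardown-done", regexp.MustCompile(`TearDown succeeded for volume .*OuterVolumeSpecName: "([^"]+)"`)},
        {"detached", regexp.MustCompile(`Volume detached for volume \\"([^"\\]+)\\"`)},
    }

    var uid = regexp.MustCompile(`[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12}`)

    func main() {
        state := map[string]string{} // "podUID/volume" -> last step seen
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1<<20), 1<<20) // journal lines can be very long
        for sc.Scan() {
            for _, s := range steps {
                if m := s.re.FindStringSubmatch(sc.Text()); m != nil {
                    state[uid.FindString(sc.Text())+"/"+m[1]] = s.name
                }
            }
        }
        for vol, step := range state {
            fmt.Printf("%-64s %s\n", vol, step) // anything not "detached" is suspect
        }
    }

Fed this capture on stdin, every volume in the block above should end in the "detached" state, mirroring the reconciler_common.go:293 lines.]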
Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.079584 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7wxr8"] Jan 21 11:02:05 crc kubenswrapper[4881]: E0121 11:02:05.080426 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" containerName="extract-utilities" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080443 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" containerName="extract-utilities" Jan 21 11:02:05 crc kubenswrapper[4881]: E0121 11:02:05.080457 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e94f1e92-21b2-44c9-b499-b879850c288d" containerName="marketplace-operator" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080463 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e94f1e92-21b2-44c9-b499-b879850c288d" containerName="marketplace-operator" Jan 21 11:02:05 crc kubenswrapper[4881]: E0121 11:02:05.080473 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" containerName="extract-content" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080479 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" containerName="extract-content" Jan 21 11:02:05 crc kubenswrapper[4881]: E0121 11:02:05.080487 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" containerName="extract-content" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080493 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" containerName="extract-content" Jan 21 11:02:05 crc kubenswrapper[4881]: E0121 11:02:05.080500 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" containerName="extract-content" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080505 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" containerName="extract-content" Jan 21 11:02:05 crc kubenswrapper[4881]: E0121 11:02:05.080514 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" containerName="registry-server" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080522 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" containerName="registry-server" Jan 21 11:02:05 crc kubenswrapper[4881]: E0121 11:02:05.080533 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" containerName="registry-server" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080539 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" containerName="registry-server" Jan 21 11:02:05 crc kubenswrapper[4881]: E0121 11:02:05.080572 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" containerName="registry-server" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080580 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" containerName="registry-server" Jan 21 11:02:05 crc kubenswrapper[4881]: E0121 11:02:05.080588 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" containerName="extract-utilities" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080594 4881 state_mem.go:107] "Deleted
CPUSet assignment" podUID="e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" containerName="extract-utilities" Jan 21 11:02:05 crc kubenswrapper[4881]: E0121 11:02:05.080602 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" containerName="extract-utilities" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080608 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" containerName="extract-utilities" Jan 21 11:02:05 crc kubenswrapper[4881]: E0121 11:02:05.080619 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d318e830-067f-4722-9d74-a45fcefc939d" containerName="extract-utilities" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080625 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d318e830-067f-4722-9d74-a45fcefc939d" containerName="extract-utilities" Jan 21 11:02:05 crc kubenswrapper[4881]: E0121 11:02:05.080633 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d318e830-067f-4722-9d74-a45fcefc939d" containerName="extract-content" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080639 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d318e830-067f-4722-9d74-a45fcefc939d" containerName="extract-content" Jan 21 11:02:05 crc kubenswrapper[4881]: E0121 11:02:05.080648 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d318e830-067f-4722-9d74-a45fcefc939d" containerName="registry-server" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080653 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d318e830-067f-4722-9d74-a45fcefc939d" containerName="registry-server" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080753 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d318e830-067f-4722-9d74-a45fcefc939d" containerName="registry-server" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080768 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" containerName="registry-server" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080778 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e94f1e92-21b2-44c9-b499-b879850c288d" containerName="marketplace-operator" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080808 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" containerName="registry-server" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080819 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" containerName="registry-server"
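[Annotation — the cpu_manager.go:410 / state_mem.go:107 / memory_manager.go:354 burst above is admission-time housekeeping: while admitting certified-operators-7wxr8, the CPU and memory managers scrub checkpointed assignments belonging to the pod UIDs that were just removed. The E-prefixed lines are emitted at error severity, but in this replace-and-recreate flow they record expected cleanup, not failures. A rough sketch of the sweep, with illustrative types rather than kubelet's real structures:

    package main

    import "fmt"

    type key struct{ podUID, container string }

    // removeStaleState drops any checkpointed assignment whose pod UID is no
    // longer active, so deleted pods cannot keep CPUs or memory pinned.
    func removeStaleState(assignments map[key]string, active map[string]bool) {
        for k := range assignments { // deleting while ranging is safe in Go
            if !active[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
                    k.podUID, k.container)
                delete(assignments, k)
            }
        }
    }

    func main() {
        assignments := map[key]string{
            {podUID: "8e002e57-13ab-477a-9e16-980e13b5e47f", container: "registry-server"}: "cpus 0-3",
        }
        removeStaleState(assignments, map[string]bool{ /* only newly admitted UIDs */ })
    }
]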
Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.081998 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7wxr8" Jan 21 11:02:05 crc kubenswrapper[4881]: W0121 11:02:05.085609 4881 reflector.go:561] object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g": failed to list *v1.Secret: secrets "certified-operators-dockercfg-4rs5g" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Jan 21 11:02:05 crc kubenswrapper[4881]: E0121 11:02:05.085657 4881 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-4rs5g\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"certified-operators-dockercfg-4rs5g\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.094923 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7wxr8"] Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.179153 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e9defc7-ad37-4742-b149-cb71d7ea177a-catalog-content\") pod \"certified-operators-7wxr8\" (UID: \"6e9defc7-ad37-4742-b149-cb71d7ea177a\") " pod="openshift-marketplace/certified-operators-7wxr8" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.179250 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e9defc7-ad37-4742-b149-cb71d7ea177a-utilities\") pod \"certified-operators-7wxr8\" (UID: \"6e9defc7-ad37-4742-b149-cb71d7ea177a\") " pod="openshift-marketplace/certified-operators-7wxr8" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.179296 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxc6x\" (UniqueName: \"kubernetes.io/projected/6e9defc7-ad37-4742-b149-cb71d7ea177a-kube-api-access-wxc6x\") pod \"certified-operators-7wxr8\" (UID: \"6e9defc7-ad37-4742-b149-cb71d7ea177a\") " pod="openshift-marketplace/certified-operators-7wxr8" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.280325 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e9defc7-ad37-4742-b149-cb71d7ea177a-catalog-content\") pod \"certified-operators-7wxr8\" (UID: \"6e9defc7-ad37-4742-b149-cb71d7ea177a\") " pod="openshift-marketplace/certified-operators-7wxr8" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.280397 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e9defc7-ad37-4742-b149-cb71d7ea177a-utilities\") pod \"certified-operators-7wxr8\" (UID: \"6e9defc7-ad37-4742-b149-cb71d7ea177a\") " pod="openshift-marketplace/certified-operators-7wxr8" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.280444 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxc6x\" (UniqueName: \"kubernetes.io/projected/6e9defc7-ad37-4742-b149-cb71d7ea177a-kube-api-access-wxc6x\") pod \"certified-operators-7wxr8\" (UID: \"6e9defc7-ad37-4742-b149-cb71d7ea177a\") " 
pod="openshift-marketplace/certified-operators-7wxr8" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.281363 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e9defc7-ad37-4742-b149-cb71d7ea177a-catalog-content\") pod \"certified-operators-7wxr8\" (UID: \"6e9defc7-ad37-4742-b149-cb71d7ea177a\") " pod="openshift-marketplace/certified-operators-7wxr8" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.281390 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e9defc7-ad37-4742-b149-cb71d7ea177a-utilities\") pod \"certified-operators-7wxr8\" (UID: \"6e9defc7-ad37-4742-b149-cb71d7ea177a\") " pod="openshift-marketplace/certified-operators-7wxr8" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.314022 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxc6x\" (UniqueName: \"kubernetes.io/projected/6e9defc7-ad37-4742-b149-cb71d7ea177a-kube-api-access-wxc6x\") pod \"certified-operators-7wxr8\" (UID: \"6e9defc7-ad37-4742-b149-cb71d7ea177a\") " pod="openshift-marketplace/certified-operators-7wxr8" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.319761 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" path="/var/lib/kubelet/pods/075db786-6ad0-4982-b70e-bd05d4f240ec/volumes" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.320720 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" path="/var/lib/kubelet/pods/8e002e57-13ab-477a-9e16-980e13b5e47f/volumes" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.321448 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d318e830-067f-4722-9d74-a45fcefc939d" path="/var/lib/kubelet/pods/d318e830-067f-4722-9d74-a45fcefc939d/volumes" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.322614 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" path="/var/lib/kubelet/pods/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a/volumes" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.323301 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e94f1e92-21b2-44c9-b499-b879850c288d" path="/var/lib/kubelet/pods/e94f1e92-21b2-44c9-b499-b879850c288d/volumes" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.465493 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" event={"ID":"98f0e6fe-f27f-4d75-9149-6238b2220849","Type":"ContainerStarted","Data":"3d438dff4284b7b3533355ae936f073ed95243d784cbf4ae5e7206dc38abc68d"} Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.465808 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.471657 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.486219 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" podStartSLOduration=3.486196576 podStartE2EDuration="3.486196576s" podCreationTimestamp="2026-01-21 11:02:02 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:02:05.484261406 +0000 UTC m=+312.744217875" watchObservedRunningTime="2026-01-21 11:02:05.486196576 +0000 UTC m=+312.746153045" Jan 21 11:02:06 crc kubenswrapper[4881]: I0121 11:02:06.318674 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 11:02:06 crc kubenswrapper[4881]: I0121 11:02:06.326172 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7wxr8" Jan 21 11:02:06 crc kubenswrapper[4881]: I0121 11:02:06.816608 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7wxr8"] Jan 21 11:02:06 crc kubenswrapper[4881]: W0121 11:02:06.824071 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e9defc7_ad37_4742_b149_cb71d7ea177a.slice/crio-fa83766f89d1616cf56747b49c2fcf160a37e27aa6ba9e86f2b0cf1ec797c327 WatchSource:0}: Error finding container fa83766f89d1616cf56747b49c2fcf160a37e27aa6ba9e86f2b0cf1ec797c327: Status 404 returned error can't find the container with id fa83766f89d1616cf56747b49c2fcf160a37e27aa6ba9e86f2b0cf1ec797c327 Jan 21 11:02:06 crc kubenswrapper[4881]: I0121 11:02:06.879014 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rs9gj"] Jan 21 11:02:06 crc kubenswrapper[4881]: I0121 11:02:06.880308 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rs9gj" Jan 21 11:02:06 crc kubenswrapper[4881]: I0121 11:02:06.884518 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 11:02:06 crc kubenswrapper[4881]: I0121 11:02:06.898916 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rs9gj"] Jan 21 11:02:06 crc kubenswrapper[4881]: I0121 11:02:06.904072 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d87675-513f-412d-a34c-d789cce5b4e8-catalog-content\") pod \"redhat-marketplace-rs9gj\" (UID: \"c6d87675-513f-412d-a34c-d789cce5b4e8\") " pod="openshift-marketplace/redhat-marketplace-rs9gj" Jan 21 11:02:06 crc kubenswrapper[4881]: I0121 11:02:06.904137 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqspx\" (UniqueName: \"kubernetes.io/projected/c6d87675-513f-412d-a34c-d789cce5b4e8-kube-api-access-pqspx\") pod \"redhat-marketplace-rs9gj\" (UID: \"c6d87675-513f-412d-a34c-d789cce5b4e8\") " pod="openshift-marketplace/redhat-marketplace-rs9gj" Jan 21 11:02:06 crc kubenswrapper[4881]: I0121 11:02:06.904362 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d87675-513f-412d-a34c-d789cce5b4e8-utilities\") pod \"redhat-marketplace-rs9gj\" (UID: \"c6d87675-513f-412d-a34c-d789cce5b4e8\") " pod="openshift-marketplace/redhat-marketplace-rs9gj" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.006100 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c6d87675-513f-412d-a34c-d789cce5b4e8-catalog-content\") pod \"redhat-marketplace-rs9gj\" (UID: \"c6d87675-513f-412d-a34c-d789cce5b4e8\") " pod="openshift-marketplace/redhat-marketplace-rs9gj" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.006158 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqspx\" (UniqueName: \"kubernetes.io/projected/c6d87675-513f-412d-a34c-d789cce5b4e8-kube-api-access-pqspx\") pod \"redhat-marketplace-rs9gj\" (UID: \"c6d87675-513f-412d-a34c-d789cce5b4e8\") " pod="openshift-marketplace/redhat-marketplace-rs9gj" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.006206 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d87675-513f-412d-a34c-d789cce5b4e8-utilities\") pod \"redhat-marketplace-rs9gj\" (UID: \"c6d87675-513f-412d-a34c-d789cce5b4e8\") " pod="openshift-marketplace/redhat-marketplace-rs9gj" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.006702 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d87675-513f-412d-a34c-d789cce5b4e8-utilities\") pod \"redhat-marketplace-rs9gj\" (UID: \"c6d87675-513f-412d-a34c-d789cce5b4e8\") " pod="openshift-marketplace/redhat-marketplace-rs9gj" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.006955 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d87675-513f-412d-a34c-d789cce5b4e8-catalog-content\") pod \"redhat-marketplace-rs9gj\" (UID: \"c6d87675-513f-412d-a34c-d789cce5b4e8\") " pod="openshift-marketplace/redhat-marketplace-rs9gj" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.040383 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqspx\" (UniqueName: \"kubernetes.io/projected/c6d87675-513f-412d-a34c-d789cce5b4e8-kube-api-access-pqspx\") pod \"redhat-marketplace-rs9gj\" (UID: \"c6d87675-513f-412d-a34c-d789cce5b4e8\") " pod="openshift-marketplace/redhat-marketplace-rs9gj" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.223358 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rs9gj" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.481051 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kfzl8"] Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.482923 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kfzl8" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.492979 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kfzl8"] Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.493297 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.494578 4881 generic.go:334] "Generic (PLEG): container finished" podID="6e9defc7-ad37-4742-b149-cb71d7ea177a" containerID="33e03055f6685a2d8d66bf472cdde01237efd3237849c8e149705b78539ac11b" exitCode=0 Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.495426 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wxr8" event={"ID":"6e9defc7-ad37-4742-b149-cb71d7ea177a","Type":"ContainerDied","Data":"33e03055f6685a2d8d66bf472cdde01237efd3237849c8e149705b78539ac11b"} Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.495459 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wxr8" event={"ID":"6e9defc7-ad37-4742-b149-cb71d7ea177a","Type":"ContainerStarted","Data":"fa83766f89d1616cf56747b49c2fcf160a37e27aa6ba9e86f2b0cf1ec797c327"} Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.512868 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ds4w\" (UniqueName: \"kubernetes.io/projected/8ab3938c-6614-4877-a94c-75b90f339523-kube-api-access-9ds4w\") pod \"redhat-operators-kfzl8\" (UID: \"8ab3938c-6614-4877-a94c-75b90f339523\") " pod="openshift-marketplace/redhat-operators-kfzl8" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.512934 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ab3938c-6614-4877-a94c-75b90f339523-utilities\") pod \"redhat-operators-kfzl8\" (UID: \"8ab3938c-6614-4877-a94c-75b90f339523\") " pod="openshift-marketplace/redhat-operators-kfzl8" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.512982 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ab3938c-6614-4877-a94c-75b90f339523-catalog-content\") pod \"redhat-operators-kfzl8\" (UID: \"8ab3938c-6614-4877-a94c-75b90f339523\") " pod="openshift-marketplace/redhat-operators-kfzl8" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.614412 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ds4w\" (UniqueName: \"kubernetes.io/projected/8ab3938c-6614-4877-a94c-75b90f339523-kube-api-access-9ds4w\") pod \"redhat-operators-kfzl8\" (UID: \"8ab3938c-6614-4877-a94c-75b90f339523\") " pod="openshift-marketplace/redhat-operators-kfzl8" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.614479 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ab3938c-6614-4877-a94c-75b90f339523-utilities\") pod \"redhat-operators-kfzl8\" (UID: \"8ab3938c-6614-4877-a94c-75b90f339523\") " pod="openshift-marketplace/redhat-operators-kfzl8" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.614545 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8ab3938c-6614-4877-a94c-75b90f339523-catalog-content\") pod \"redhat-operators-kfzl8\" (UID: \"8ab3938c-6614-4877-a94c-75b90f339523\") " pod="openshift-marketplace/redhat-operators-kfzl8" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.615100 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ab3938c-6614-4877-a94c-75b90f339523-utilities\") pod \"redhat-operators-kfzl8\" (UID: \"8ab3938c-6614-4877-a94c-75b90f339523\") " pod="openshift-marketplace/redhat-operators-kfzl8" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.615160 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ab3938c-6614-4877-a94c-75b90f339523-catalog-content\") pod \"redhat-operators-kfzl8\" (UID: \"8ab3938c-6614-4877-a94c-75b90f339523\") " pod="openshift-marketplace/redhat-operators-kfzl8" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.774843 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rs9gj"] Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.777689 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ds4w\" (UniqueName: \"kubernetes.io/projected/8ab3938c-6614-4877-a94c-75b90f339523-kube-api-access-9ds4w\") pod \"redhat-operators-kfzl8\" (UID: \"8ab3938c-6614-4877-a94c-75b90f339523\") " pod="openshift-marketplace/redhat-operators-kfzl8" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.860151 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kfzl8" Jan 21 11:02:08 crc kubenswrapper[4881]: I0121 11:02:08.289229 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kfzl8"] Jan 21 11:02:08 crc kubenswrapper[4881]: W0121 11:02:08.294227 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ab3938c_6614_4877_a94c_75b90f339523.slice/crio-88693e4459975d71f2437f1140fa85449acac7a24f76403599ddaf3666aae16f WatchSource:0}: Error finding container 88693e4459975d71f2437f1140fa85449acac7a24f76403599ddaf3666aae16f: Status 404 returned error can't find the container with id 88693e4459975d71f2437f1140fa85449acac7a24f76403599ddaf3666aae16f Jan 21 11:02:08 crc kubenswrapper[4881]: E0121 11:02:08.445514 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod706c6a3b_823b_4ea3_b7a8_e20d571d3ace.slice/crio-conmon-9c8c8d93509d2a29c183d63351f0748ec6e60414dbb285df980924884b598111.scope\": RecentStats: unable to find data in memory cache]" Jan 21 11:02:08 crc kubenswrapper[4881]: I0121 11:02:08.503554 4881 generic.go:334] "Generic (PLEG): container finished" podID="8ab3938c-6614-4877-a94c-75b90f339523" containerID="80ed99dabfcdf4861f6392eac676390bb9f707460dae3cb2412782ac0dea7ce7" exitCode=0 Jan 21 11:02:08 crc kubenswrapper[4881]: I0121 11:02:08.503675 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfzl8" event={"ID":"8ab3938c-6614-4877-a94c-75b90f339523","Type":"ContainerDied","Data":"80ed99dabfcdf4861f6392eac676390bb9f707460dae3cb2412782ac0dea7ce7"} Jan 21 11:02:08 crc kubenswrapper[4881]: I0121 11:02:08.503975 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-kfzl8" event={"ID":"8ab3938c-6614-4877-a94c-75b90f339523","Type":"ContainerStarted","Data":"88693e4459975d71f2437f1140fa85449acac7a24f76403599ddaf3666aae16f"} Jan 21 11:02:08 crc kubenswrapper[4881]: I0121 11:02:08.505940 4881 generic.go:334] "Generic (PLEG): container finished" podID="c6d87675-513f-412d-a34c-d789cce5b4e8" containerID="f21d4cc6fd187e6ec66292e99a2bb2ca06f019c39a2d6d6b3adc53079835eb38" exitCode=0 Jan 21 11:02:08 crc kubenswrapper[4881]: I0121 11:02:08.505997 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs9gj" event={"ID":"c6d87675-513f-412d-a34c-d789cce5b4e8","Type":"ContainerDied","Data":"f21d4cc6fd187e6ec66292e99a2bb2ca06f019c39a2d6d6b3adc53079835eb38"} Jan 21 11:02:08 crc kubenswrapper[4881]: I0121 11:02:08.506042 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs9gj" event={"ID":"c6d87675-513f-412d-a34c-d789cce5b4e8","Type":"ContainerStarted","Data":"2424d1e04d00485140739da64c8bc221515f617d68355bbb5c646d9660b39e0f"} Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.277703 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bn24k"] Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.281238 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bn24k" Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.284000 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.294200 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bn24k"] Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.340915 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb2faf64-08ef-4413-84f0-10e88dcb7a8f-catalog-content\") pod \"community-operators-bn24k\" (UID: \"cb2faf64-08ef-4413-84f0-10e88dcb7a8f\") " pod="openshift-marketplace/community-operators-bn24k" Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.341479 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n76l\" (UniqueName: \"kubernetes.io/projected/cb2faf64-08ef-4413-84f0-10e88dcb7a8f-kube-api-access-7n76l\") pod \"community-operators-bn24k\" (UID: \"cb2faf64-08ef-4413-84f0-10e88dcb7a8f\") " pod="openshift-marketplace/community-operators-bn24k" Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.341530 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb2faf64-08ef-4413-84f0-10e88dcb7a8f-utilities\") pod \"community-operators-bn24k\" (UID: \"cb2faf64-08ef-4413-84f0-10e88dcb7a8f\") " pod="openshift-marketplace/community-operators-bn24k" Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.443158 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb2faf64-08ef-4413-84f0-10e88dcb7a8f-catalog-content\") pod \"community-operators-bn24k\" (UID: \"cb2faf64-08ef-4413-84f0-10e88dcb7a8f\") " pod="openshift-marketplace/community-operators-bn24k"
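[Annotation — three error-looking signatures above are transient races, not faults. The reflector.go warning at 11:02:05.085 (cannot list "secrets" ... "no relationship found between node 'crc' and this object") is the node authorizer lagging a brand-new pod's secret reference; it clears when "Caches populated" appears at 11:02:06.318. The manager.go:1169 "Failed to process watch event ... Status 404" entries occur when the cgroup watch races a just-created crio scope, and the cadvisor_stats_provider "Partial failure ... RecentStats" entry at 11:02:08.445 means the stats cache has no sample yet for a brand-new crio-conmon scope. A small filter sketch in Go — the patterns are assumptions fitted to this capture, not an exhaustive allowlist — that suppresses these known-transient signatures during log triage:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    var transient = []*regexp.Regexp{
        // cgroup watch raced a short-lived crio scope
        regexp.MustCompile(`Failed to process watch event .*Status 404 returned error can't find the container`),
        // stats cache had no sample yet for a brand-new scope
        regexp.MustCompile(`Partial failure issuing cadvisor\.ContainerInfoV2.*RecentStats`),
        // node authorizer graph lagged a new pod's secret reference
        regexp.MustCompile(`failed to list \*v1\.Secret.*no relationship found between node`),
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1<<20), 1<<20)
    scan:
        for sc.Scan() {
            for _, re := range transient {
                if re.MatchString(sc.Text()) {
                    continue scan // drop known-transient noise
                }
            }
            fmt.Println(sc.Text()) // keep lines that still need a human
        }
    }
]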
Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.443244 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n76l\" (UniqueName: \"kubernetes.io/projected/cb2faf64-08ef-4413-84f0-10e88dcb7a8f-kube-api-access-7n76l\") pod \"community-operators-bn24k\" (UID: \"cb2faf64-08ef-4413-84f0-10e88dcb7a8f\") " pod="openshift-marketplace/community-operators-bn24k" Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.443299 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb2faf64-08ef-4413-84f0-10e88dcb7a8f-utilities\") pod \"community-operators-bn24k\" (UID: \"cb2faf64-08ef-4413-84f0-10e88dcb7a8f\") " pod="openshift-marketplace/community-operators-bn24k" Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.443875 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb2faf64-08ef-4413-84f0-10e88dcb7a8f-catalog-content\") pod \"community-operators-bn24k\" (UID: \"cb2faf64-08ef-4413-84f0-10e88dcb7a8f\") " pod="openshift-marketplace/community-operators-bn24k" Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.444052 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb2faf64-08ef-4413-84f0-10e88dcb7a8f-utilities\") pod \"community-operators-bn24k\" (UID: \"cb2faf64-08ef-4413-84f0-10e88dcb7a8f\") " pod="openshift-marketplace/community-operators-bn24k" Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.470004 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n76l\" (UniqueName: \"kubernetes.io/projected/cb2faf64-08ef-4413-84f0-10e88dcb7a8f-kube-api-access-7n76l\") pod \"community-operators-bn24k\" (UID: \"cb2faf64-08ef-4413-84f0-10e88dcb7a8f\") " pod="openshift-marketplace/community-operators-bn24k" Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.525246 4881 generic.go:334] "Generic (PLEG): container finished" podID="6e9defc7-ad37-4742-b149-cb71d7ea177a" containerID="4db026c7a3931d2831df7d16599a8c6dcf49b2a19182776365bc55b2b2f46493" exitCode=0 Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.525320 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wxr8" event={"ID":"6e9defc7-ad37-4742-b149-cb71d7ea177a","Type":"ContainerDied","Data":"4db026c7a3931d2831df7d16599a8c6dcf49b2a19182776365bc55b2b2f46493"} Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.601422 4881 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-bn24k" Jan 21 11:02:10 crc kubenswrapper[4881]: I0121 11:02:10.019602 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bn24k"] Jan 21 11:02:10 crc kubenswrapper[4881]: I0121 11:02:10.539565 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bn24k" event={"ID":"cb2faf64-08ef-4413-84f0-10e88dcb7a8f","Type":"ContainerDied","Data":"2bc3a6833c19a70d3aefa8d3c7bda35cb891f30c489f62da688f653e6d7c4048"} Jan 21 11:02:10 crc kubenswrapper[4881]: I0121 11:02:10.539388 4881 generic.go:334] "Generic (PLEG): container finished" podID="cb2faf64-08ef-4413-84f0-10e88dcb7a8f" containerID="2bc3a6833c19a70d3aefa8d3c7bda35cb891f30c489f62da688f653e6d7c4048" exitCode=0 Jan 21 11:02:10 crc kubenswrapper[4881]: I0121 11:02:10.541225 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bn24k" event={"ID":"cb2faf64-08ef-4413-84f0-10e88dcb7a8f","Type":"ContainerStarted","Data":"8d46bedb9408c2dc616eea8f07cc08e082f36cfe66f9e1afcb0ddd050f15dd6e"} Jan 21 11:02:10 crc kubenswrapper[4881]: I0121 11:02:10.544567 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfzl8" event={"ID":"8ab3938c-6614-4877-a94c-75b90f339523","Type":"ContainerStarted","Data":"142270a0f15473b6b15a9291d78a9ba2f0025e0134ceb84d54d49e6513c177a4"} Jan 21 11:02:10 crc kubenswrapper[4881]: I0121 11:02:10.546807 4881 generic.go:334] "Generic (PLEG): container finished" podID="c6d87675-513f-412d-a34c-d789cce5b4e8" containerID="b5293e61a579622e926dcba79f271c961ed1e83eaf9a6ba92c4789455fe018fa" exitCode=0 Jan 21 11:02:10 crc kubenswrapper[4881]: I0121 11:02:10.546882 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs9gj" event={"ID":"c6d87675-513f-412d-a34c-d789cce5b4e8","Type":"ContainerDied","Data":"b5293e61a579622e926dcba79f271c961ed1e83eaf9a6ba92c4789455fe018fa"} Jan 21 11:02:10 crc kubenswrapper[4881]: I0121 11:02:10.555027 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wxr8" event={"ID":"6e9defc7-ad37-4742-b149-cb71d7ea177a","Type":"ContainerStarted","Data":"0e6453a359c5a4e747e31e98eddd534a0b0eb94099fbb500453c3b01a577db1a"} Jan 21 11:02:10 crc kubenswrapper[4881]: I0121 11:02:10.646762 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7wxr8" podStartSLOduration=3.086584767 podStartE2EDuration="5.646742865s" podCreationTimestamp="2026-01-21 11:02:05 +0000 UTC" firstStartedPulling="2026-01-21 11:02:07.498424984 +0000 UTC m=+314.758381453" lastFinishedPulling="2026-01-21 11:02:10.058583072 +0000 UTC m=+317.318539551" observedRunningTime="2026-01-21 11:02:10.619742225 +0000 UTC m=+317.879698704" watchObservedRunningTime="2026-01-21 11:02:10.646742865 +0000 UTC m=+317.906699354"
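[Annotation — the pod_startup_latency_tracker entries decompose cleanly: podStartE2EDuration = observedRunningTime - podCreationTimestamp, and podStartSLOduration is the E2E figure minus the image-pull window (lastFinishedPulling - firstStartedPulling). For certified-operators-7wxr8 above: 11:02:10.646742865 - 11:02:05 = 5.646742865s end to end, the pull window is about 2.560158s, and 5.6467 - 2.5602 ≈ 3.0866s, matching podStartSLOduration=3.086584767 to within wall-clock/monotonic skew. (For marketplace-operator-79b997595-vrcvz earlier, the pull timestamps are the zero time "0001-01-01 ...", so SLO and E2E durations coincide at 3.486s.) A checkable sketch, with the layout string assumed from the timestamp format in these lines:

    package main

    import (
        "fmt"
        "time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2026-01-21 11:02:05 +0000 UTC")
        observed := mustParse("2026-01-21 11:02:10.646742865 +0000 UTC")
        pullStart := mustParse("2026-01-21 11:02:07.498424984 +0000 UTC")
        pullEnd := mustParse("2026-01-21 11:02:10.058583072 +0000 UTC")

        e2e := observed.Sub(created)   // podStartE2EDuration
        pull := pullEnd.Sub(pullStart) // image-pull window
        slo := e2e - pull              // podStartSLOduration, modulo monotonic-clock skew

        fmt.Println(e2e, pull, slo) // 5.646742865s 2.560158088s 3.086584777s
    }
]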
Jan 21 11:02:11 crc kubenswrapper[4881]: I0121 11:02:11.564761 4881 generic.go:334] "Generic (PLEG): container finished" podID="8ab3938c-6614-4877-a94c-75b90f339523" containerID="142270a0f15473b6b15a9291d78a9ba2f0025e0134ceb84d54d49e6513c177a4" exitCode=0 Jan 21 11:02:11 crc kubenswrapper[4881]: I0121 11:02:11.564842 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfzl8" event={"ID":"8ab3938c-6614-4877-a94c-75b90f339523","Type":"ContainerDied","Data":"142270a0f15473b6b15a9291d78a9ba2f0025e0134ceb84d54d49e6513c177a4"} Jan 21 11:02:11 crc kubenswrapper[4881]: I0121 11:02:11.571695 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs9gj" event={"ID":"c6d87675-513f-412d-a34c-d789cce5b4e8","Type":"ContainerStarted","Data":"9eb18af2f3ac618610e0f5f123310ad2b3628cc38f624ea02bf868b24d18591d"} Jan 21 11:02:11 crc kubenswrapper[4881]: I0121 11:02:11.619511 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rs9gj" podStartSLOduration=3.144473123 podStartE2EDuration="5.619491123s" podCreationTimestamp="2026-01-21 11:02:06 +0000 UTC" firstStartedPulling="2026-01-21 11:02:08.508622882 +0000 UTC m=+315.768579361" lastFinishedPulling="2026-01-21 11:02:10.983640892 +0000 UTC m=+318.243597361" observedRunningTime="2026-01-21 11:02:11.618466847 +0000 UTC m=+318.878423326" watchObservedRunningTime="2026-01-21 11:02:11.619491123 +0000 UTC m=+318.879447592" Jan 21 11:02:14 crc kubenswrapper[4881]: I0121 11:02:14.595355 4881 generic.go:334] "Generic (PLEG): container finished" podID="cb2faf64-08ef-4413-84f0-10e88dcb7a8f" containerID="b3e533d4d70488faedff073733cc253d326f53f9694186d0d0cf9f09a4fc6782" exitCode=0 Jan 21 11:02:14 crc kubenswrapper[4881]: I0121 11:02:14.595473 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bn24k" event={"ID":"cb2faf64-08ef-4413-84f0-10e88dcb7a8f","Type":"ContainerDied","Data":"b3e533d4d70488faedff073733cc253d326f53f9694186d0d0cf9f09a4fc6782"} Jan 21 11:02:14 crc kubenswrapper[4881]: I0121 11:02:14.599855 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfzl8" event={"ID":"8ab3938c-6614-4877-a94c-75b90f339523","Type":"ContainerStarted","Data":"6c985b0a85d51bc19103867cc9f550fc4307bd820ffe6880eab65e8191d76ff5"} Jan 21 11:02:14 crc kubenswrapper[4881]: I0121 11:02:14.646454 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kfzl8" podStartSLOduration=2.337105189 podStartE2EDuration="7.646425116s" podCreationTimestamp="2026-01-21 11:02:07 +0000 UTC" firstStartedPulling="2026-01-21 11:02:08.508932271 +0000 UTC m=+315.768888780" lastFinishedPulling="2026-01-21 11:02:13.818252238 +0000 UTC m=+321.078208707" observedRunningTime="2026-01-21 11:02:14.643916231 +0000 UTC m=+321.903872720" watchObservedRunningTime="2026-01-21 11:02:14.646425116 +0000 UTC m=+321.906381585" Jan 21 11:02:16 crc kubenswrapper[4881]: I0121 11:02:16.327425 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7wxr8" Jan 21 11:02:16 crc kubenswrapper[4881]: I0121 11:02:16.328675 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7wxr8" Jan 21 11:02:16 crc kubenswrapper[4881]: I0121 11:02:16.406437 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7wxr8" Jan 21 11:02:16 crc kubenswrapper[4881]: I0121 11:02:16.614232 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bn24k" event={"ID":"cb2faf64-08ef-4413-84f0-10e88dcb7a8f","Type":"ContainerStarted","Data":"72d93ab1b3e1b04224e69f553bae54791b77965d7fbd59e56d289adec26cd444"} Jan 21 11:02:16 crc
kubenswrapper[4881]: I0121 11:02:16.644529 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bn24k" podStartSLOduration=2.506498622 podStartE2EDuration="7.644509046s" podCreationTimestamp="2026-01-21 11:02:09 +0000 UTC" firstStartedPulling="2026-01-21 11:02:10.542281716 +0000 UTC m=+317.802238185" lastFinishedPulling="2026-01-21 11:02:15.68029214 +0000 UTC m=+322.940248609" observedRunningTime="2026-01-21 11:02:16.641341354 +0000 UTC m=+323.901297833" watchObservedRunningTime="2026-01-21 11:02:16.644509046 +0000 UTC m=+323.904465515" Jan 21 11:02:16 crc kubenswrapper[4881]: I0121 11:02:16.765522 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b"] Jan 21 11:02:16 crc kubenswrapper[4881]: I0121 11:02:16.765815 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" podUID="b1ebf4ad-7b0d-4711-93bd-206ec36e7202" containerName="route-controller-manager" containerID="cri-o://03285c7f75ca0c5ea5fc4bbbace73cfbfd25315c2b430af309cd5af6d0d8503a" gracePeriod=30 Jan 21 11:02:16 crc kubenswrapper[4881]: I0121 11:02:16.800434 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7wxr8" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.224457 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rs9gj" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.224552 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rs9gj" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.276466 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rs9gj" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.625411 4881 generic.go:334] "Generic (PLEG): container finished" podID="b1ebf4ad-7b0d-4711-93bd-206ec36e7202" containerID="03285c7f75ca0c5ea5fc4bbbace73cfbfd25315c2b430af309cd5af6d0d8503a" exitCode=0 Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.626715 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" event={"ID":"b1ebf4ad-7b0d-4711-93bd-206ec36e7202","Type":"ContainerDied","Data":"03285c7f75ca0c5ea5fc4bbbace73cfbfd25315c2b430af309cd5af6d0d8503a"} Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.688919 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rs9gj"
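[Annotation — the route-controller-manager rollover around this point follows the same shape as the marketplace pods: the API "SyncLoop DELETE" triggers kuberuntime_container.go:808 "Killing container with a grace period" (gracePeriod=30, the pod's termination grace period in seconds), the cri-o container exits cleanly well inside that window (PLEG reports ContainerDied with exitCode=0 at 11:02:17.625, under a second after the kill), and a replacement pod (route-controller-manager-5c7f4fc56b-p8gtw) is ADDed while the old UID's volumes unwind. The "SyncLoop (probe)" transitions for the registry pods — startup "unhealthy", then "started", then readiness "ready" — are the normal first ticks of a startup probe, not a crash loop. A sketch of the graceful-stop contract; the names are illustrative, and the real kubelet issues a CRI StopContainer RPC with the runtime escalating SIGTERM to SIGKILL at the deadline:

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    func stopContainer(ctx context.Context, id string, grace time.Duration) error {
        ctx, cancel := context.WithTimeout(ctx, grace)
        defer cancel()
        exited := time.After(860 * time.Millisecond) // stand-in for the runtime's exit notification
        select {
        case <-exited: // clean shutdown, like exitCode=0 above
            fmt.Printf("container %s stopped within the grace period\n", id)
            return nil
        case <-ctx.Done(): // grace period elapsed; runtime would now SIGKILL
            return fmt.Errorf("grace period elapsed for %s: %w", id, ctx.Err())
        }
    }

    func main() {
        _ = stopContainer(context.Background(), "03285c7f75ca", 30*time.Second)
    }
]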
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.856000 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw"] Jan 21 11:02:17 crc kubenswrapper[4881]: E0121 11:02:17.856306 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ebf4ad-7b0d-4711-93bd-206ec36e7202" containerName="route-controller-manager" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.856331 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ebf4ad-7b0d-4711-93bd-206ec36e7202" containerName="route-controller-manager" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.856507 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ebf4ad-7b0d-4711-93bd-206ec36e7202" containerName="route-controller-manager" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.857105 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.862946 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kfzl8" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.863273 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kfzl8" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.871777 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw"] Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.935124 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-serving-cert\") pod \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\" (UID: \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.935239 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-config\") pod \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\" (UID: \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.935304 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-client-ca\") pod \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\" (UID: \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.935464 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wx6k\" (UniqueName: \"kubernetes.io/projected/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-kube-api-access-6wx6k\") pod \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\" (UID: \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.935803 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a91582ca-0d6d-4ed9-91bd-fdad383a8758-serving-cert\") pod \"route-controller-manager-5c7f4fc56b-p8gtw\" (UID: \"a91582ca-0d6d-4ed9-91bd-fdad383a8758\") " 
pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.935843 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt8n9\" (UniqueName: \"kubernetes.io/projected/a91582ca-0d6d-4ed9-91bd-fdad383a8758-kube-api-access-kt8n9\") pod \"route-controller-manager-5c7f4fc56b-p8gtw\" (UID: \"a91582ca-0d6d-4ed9-91bd-fdad383a8758\") " pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.935918 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a91582ca-0d6d-4ed9-91bd-fdad383a8758-config\") pod \"route-controller-manager-5c7f4fc56b-p8gtw\" (UID: \"a91582ca-0d6d-4ed9-91bd-fdad383a8758\") " pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.936001 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a91582ca-0d6d-4ed9-91bd-fdad383a8758-client-ca\") pod \"route-controller-manager-5c7f4fc56b-p8gtw\" (UID: \"a91582ca-0d6d-4ed9-91bd-fdad383a8758\") " pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.936660 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-config" (OuterVolumeSpecName: "config") pod "b1ebf4ad-7b0d-4711-93bd-206ec36e7202" (UID: "b1ebf4ad-7b0d-4711-93bd-206ec36e7202"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.936610 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-client-ca" (OuterVolumeSpecName: "client-ca") pod "b1ebf4ad-7b0d-4711-93bd-206ec36e7202" (UID: "b1ebf4ad-7b0d-4711-93bd-206ec36e7202"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.946406 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b1ebf4ad-7b0d-4711-93bd-206ec36e7202" (UID: "b1ebf4ad-7b0d-4711-93bd-206ec36e7202"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.947212 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-kube-api-access-6wx6k" (OuterVolumeSpecName: "kube-api-access-6wx6k") pod "b1ebf4ad-7b0d-4711-93bd-206ec36e7202" (UID: "b1ebf4ad-7b0d-4711-93bd-206ec36e7202"). InnerVolumeSpecName "kube-api-access-6wx6k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.037600 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a91582ca-0d6d-4ed9-91bd-fdad383a8758-config\") pod \"route-controller-manager-5c7f4fc56b-p8gtw\" (UID: \"a91582ca-0d6d-4ed9-91bd-fdad383a8758\") " pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.037715 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a91582ca-0d6d-4ed9-91bd-fdad383a8758-client-ca\") pod \"route-controller-manager-5c7f4fc56b-p8gtw\" (UID: \"a91582ca-0d6d-4ed9-91bd-fdad383a8758\") " pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.037760 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a91582ca-0d6d-4ed9-91bd-fdad383a8758-serving-cert\") pod \"route-controller-manager-5c7f4fc56b-p8gtw\" (UID: \"a91582ca-0d6d-4ed9-91bd-fdad383a8758\") " pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.037806 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt8n9\" (UniqueName: \"kubernetes.io/projected/a91582ca-0d6d-4ed9-91bd-fdad383a8758-kube-api-access-kt8n9\") pod \"route-controller-manager-5c7f4fc56b-p8gtw\" (UID: \"a91582ca-0d6d-4ed9-91bd-fdad383a8758\") " pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.037908 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wx6k\" (UniqueName: \"kubernetes.io/projected/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-kube-api-access-6wx6k\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.037932 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.037946 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.037959 4881 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.038944 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a91582ca-0d6d-4ed9-91bd-fdad383a8758-client-ca\") pod \"route-controller-manager-5c7f4fc56b-p8gtw\" (UID: \"a91582ca-0d6d-4ed9-91bd-fdad383a8758\") " pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.039072 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a91582ca-0d6d-4ed9-91bd-fdad383a8758-config\") pod \"route-controller-manager-5c7f4fc56b-p8gtw\" 
(UID: \"a91582ca-0d6d-4ed9-91bd-fdad383a8758\") " pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.042342 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a91582ca-0d6d-4ed9-91bd-fdad383a8758-serving-cert\") pod \"route-controller-manager-5c7f4fc56b-p8gtw\" (UID: \"a91582ca-0d6d-4ed9-91bd-fdad383a8758\") " pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.061183 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt8n9\" (UniqueName: \"kubernetes.io/projected/a91582ca-0d6d-4ed9-91bd-fdad383a8758-kube-api-access-kt8n9\") pod \"route-controller-manager-5c7f4fc56b-p8gtw\" (UID: \"a91582ca-0d6d-4ed9-91bd-fdad383a8758\") " pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.177141 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.639641 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" event={"ID":"b1ebf4ad-7b0d-4711-93bd-206ec36e7202","Type":"ContainerDied","Data":"cf1ccaca8e9193a4546c7cd1215ccba45fb7b47029b1d20906ee6e97c1d22afe"} Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.640215 4881 scope.go:117] "RemoveContainer" containerID="03285c7f75ca0c5ea5fc4bbbace73cfbfd25315c2b430af309cd5af6d0d8503a" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.640148 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.675208 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw"] Jan 21 11:02:18 crc kubenswrapper[4881]: E0121 11:02:18.686262 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod706c6a3b_823b_4ea3_b7a8_e20d571d3ace.slice/crio-conmon-9c8c8d93509d2a29c183d63351f0748ec6e60414dbb285df980924884b598111.scope\": RecentStats: unable to find data in memory cache]" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.692694 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b"] Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.701023 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b"] Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.906177 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kfzl8" podUID="8ab3938c-6614-4877-a94c-75b90f339523" containerName="registry-server" probeResult="failure" output=< Jan 21 11:02:18 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 11:02:18 crc kubenswrapper[4881]: > Jan 21 11:02:19 crc kubenswrapper[4881]: I0121 11:02:19.320215 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1ebf4ad-7b0d-4711-93bd-206ec36e7202" path="/var/lib/kubelet/pods/b1ebf4ad-7b0d-4711-93bd-206ec36e7202/volumes" Jan 21 11:02:19 crc kubenswrapper[4881]: I0121 11:02:19.602460 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bn24k" Jan 21 11:02:19 crc kubenswrapper[4881]: I0121 11:02:19.602528 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bn24k" Jan 21 11:02:19 crc kubenswrapper[4881]: I0121 11:02:19.648347 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" event={"ID":"a91582ca-0d6d-4ed9-91bd-fdad383a8758","Type":"ContainerStarted","Data":"00e00e53c8a8435e0245f2df4afdd5939a672ed22efb269424a38149036c2228"} Jan 21 11:02:19 crc kubenswrapper[4881]: I0121 11:02:19.649450 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" event={"ID":"a91582ca-0d6d-4ed9-91bd-fdad383a8758","Type":"ContainerStarted","Data":"e56dcc3cfafeda7c1b921610ed5ce11b403f59be14f8f975934cc18b0f5f6f01"} Jan 21 11:02:19 crc kubenswrapper[4881]: I0121 11:02:19.649483 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:19 crc kubenswrapper[4881]: I0121 11:02:19.667592 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bn24k" Jan 21 11:02:19 crc kubenswrapper[4881]: I0121 11:02:19.677333 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" podStartSLOduration=3.677283061 
podStartE2EDuration="3.677283061s" podCreationTimestamp="2026-01-21 11:02:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:02:19.676470731 +0000 UTC m=+326.936427190" watchObservedRunningTime="2026-01-21 11:02:19.677283061 +0000 UTC m=+326.937239530" Jan 21 11:02:19 crc kubenswrapper[4881]: I0121 11:02:19.825535 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:27 crc kubenswrapper[4881]: I0121 11:02:27.911161 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kfzl8" Jan 21 11:02:27 crc kubenswrapper[4881]: I0121 11:02:27.965202 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kfzl8" Jan 21 11:02:28 crc kubenswrapper[4881]: E0121 11:02:28.824186 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod706c6a3b_823b_4ea3_b7a8_e20d571d3ace.slice/crio-conmon-9c8c8d93509d2a29c183d63351f0748ec6e60414dbb285df980924884b598111.scope\": RecentStats: unable to find data in memory cache]" Jan 21 11:02:29 crc kubenswrapper[4881]: I0121 11:02:29.645934 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bn24k" Jan 21 11:02:38 crc kubenswrapper[4881]: E0121 11:02:38.977248 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod706c6a3b_823b_4ea3_b7a8_e20d571d3ace.slice/crio-conmon-9c8c8d93509d2a29c183d63351f0748ec6e60414dbb285df980924884b598111.scope\": RecentStats: unable to find data in memory cache]" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.619588 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-lh85c"] Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.620719 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.636922 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-lh85c"] Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.714846 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-bound-sa-token\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.715512 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-ca-trust-extracted\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.715553 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.715578 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-registry-certificates\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.715594 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-installation-pull-secrets\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.715614 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-trusted-ca\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.715630 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd2ws\" (UniqueName: \"kubernetes.io/projected/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-kube-api-access-rd2ws\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.715657 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-registry-tls\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.754771 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.817894 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-ca-trust-extracted\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.818874 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-registry-certificates\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.818961 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-installation-pull-secrets\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.819063 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-trusted-ca\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.819109 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd2ws\" (UniqueName: \"kubernetes.io/projected/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-kube-api-access-rd2ws\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.819111 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-ca-trust-extracted\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.819250 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-registry-tls\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.819344 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-bound-sa-token\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.820746 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-registry-certificates\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.821129 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-trusted-ca\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.828934 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-registry-tls\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.829713 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-installation-pull-secrets\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.837242 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-bound-sa-token\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.838903 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd2ws\" (UniqueName: \"kubernetes.io/projected/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-kube-api-access-rd2ws\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.942116 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:43 crc kubenswrapper[4881]: I0121 11:02:43.378175 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-lh85c"] Jan 21 11:02:43 crc kubenswrapper[4881]: I0121 11:02:43.820795 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" event={"ID":"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa","Type":"ContainerStarted","Data":"dc30c29d5d8e02dfaa22bbf78b9e3f9bf16a636a23423878f9927c0a8128eba4"} Jan 21 11:02:44 crc kubenswrapper[4881]: I0121 11:02:44.832025 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" event={"ID":"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa","Type":"ContainerStarted","Data":"8019f2e642a1262fa8ab8b87531ffad064f8fef236a2da2d0aabe26186baff21"} Jan 21 11:02:44 crc kubenswrapper[4881]: I0121 11:02:44.832649 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:44 crc kubenswrapper[4881]: I0121 11:02:44.866660 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" podStartSLOduration=2.866623023 podStartE2EDuration="2.866623023s" podCreationTimestamp="2026-01-21 11:02:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:02:44.860868233 +0000 UTC m=+352.120824712" watchObservedRunningTime="2026-01-21 11:02:44.866623023 +0000 UTC m=+352.126579492" Jan 21 11:02:49 crc kubenswrapper[4881]: E0121 11:02:49.120401 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod706c6a3b_823b_4ea3_b7a8_e20d571d3ace.slice/crio-conmon-9c8c8d93509d2a29c183d63351f0748ec6e60414dbb285df980924884b598111.scope\": RecentStats: unable to find data in memory cache]" Jan 21 11:02:59 crc kubenswrapper[4881]: I0121 11:02:59.851535 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:02:59 crc kubenswrapper[4881]: I0121 11:02:59.852481 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:03:02 crc kubenswrapper[4881]: I0121 11:03:02.948928 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:03:03 crc kubenswrapper[4881]: I0121 11:03:03.016948 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n98tz"] Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.082104 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" podUID="ec369bed-0b60-48b0-9de0-fcfd6ca7776d" containerName="registry" 
containerID="cri-o://2afb4777e26b8b9ed3649e0224c3ebc4424187c098e907f770d7f03bdea5704c" gracePeriod=30 Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.501957 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.508703 4881 generic.go:334] "Generic (PLEG): container finished" podID="ec369bed-0b60-48b0-9de0-fcfd6ca7776d" containerID="2afb4777e26b8b9ed3649e0224c3ebc4424187c098e907f770d7f03bdea5704c" exitCode=0 Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.508768 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" event={"ID":"ec369bed-0b60-48b0-9de0-fcfd6ca7776d","Type":"ContainerDied","Data":"2afb4777e26b8b9ed3649e0224c3ebc4424187c098e907f770d7f03bdea5704c"} Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.508831 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" event={"ID":"ec369bed-0b60-48b0-9de0-fcfd6ca7776d","Type":"ContainerDied","Data":"5474c3ee513cde1d48c15d56d09e1c7f705a56319c7e90c496d397eeca80a458"} Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.508828 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.508855 4881 scope.go:117] "RemoveContainer" containerID="2afb4777e26b8b9ed3649e0224c3ebc4424187c098e907f770d7f03bdea5704c" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.540604 4881 scope.go:117] "RemoveContainer" containerID="2afb4777e26b8b9ed3649e0224c3ebc4424187c098e907f770d7f03bdea5704c" Jan 21 11:03:28 crc kubenswrapper[4881]: E0121 11:03:28.543388 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2afb4777e26b8b9ed3649e0224c3ebc4424187c098e907f770d7f03bdea5704c\": container with ID starting with 2afb4777e26b8b9ed3649e0224c3ebc4424187c098e907f770d7f03bdea5704c not found: ID does not exist" containerID="2afb4777e26b8b9ed3649e0224c3ebc4424187c098e907f770d7f03bdea5704c" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.543881 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2afb4777e26b8b9ed3649e0224c3ebc4424187c098e907f770d7f03bdea5704c"} err="failed to get container status \"2afb4777e26b8b9ed3649e0224c3ebc4424187c098e907f770d7f03bdea5704c\": rpc error: code = NotFound desc = could not find container \"2afb4777e26b8b9ed3649e0224c3ebc4424187c098e907f770d7f03bdea5704c\": container with ID starting with 2afb4777e26b8b9ed3649e0224c3ebc4424187c098e907f770d7f03bdea5704c not found: ID does not exist" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.642258 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.642530 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-bound-sa-token\") pod \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\" (UID: 
\"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.642573 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-registry-tls\") pod \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.642626 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6ljz\" (UniqueName: \"kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-kube-api-access-z6ljz\") pod \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.642691 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-trusted-ca\") pod \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.642727 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-registry-certificates\") pod \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.642756 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-installation-pull-secrets\") pod \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.642828 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-ca-trust-extracted\") pod \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.644548 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "ec369bed-0b60-48b0-9de0-fcfd6ca7776d" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.652405 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "ec369bed-0b60-48b0-9de0-fcfd6ca7776d" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.654022 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "ec369bed-0b60-48b0-9de0-fcfd6ca7776d" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.654868 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "ec369bed-0b60-48b0-9de0-fcfd6ca7776d" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.655700 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-kube-api-access-z6ljz" (OuterVolumeSpecName: "kube-api-access-z6ljz") pod "ec369bed-0b60-48b0-9de0-fcfd6ca7776d" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d"). InnerVolumeSpecName "kube-api-access-z6ljz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.655971 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "ec369bed-0b60-48b0-9de0-fcfd6ca7776d" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.661375 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "ec369bed-0b60-48b0-9de0-fcfd6ca7776d" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.663614 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "ec369bed-0b60-48b0-9de0-fcfd6ca7776d" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.744929 4881 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.744982 4881 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.744994 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6ljz\" (UniqueName: \"kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-kube-api-access-z6ljz\") on node \"crc\" DevicePath \"\"" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.745041 4881 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.745051 4881 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.745062 4881 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.745075 4881 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.843032 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n98tz"] Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.859099 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n98tz"] Jan 21 11:03:29 crc kubenswrapper[4881]: I0121 11:03:29.320011 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec369bed-0b60-48b0-9de0-fcfd6ca7776d" path="/var/lib/kubelet/pods/ec369bed-0b60-48b0-9de0-fcfd6ca7776d/volumes" Jan 21 11:03:29 crc kubenswrapper[4881]: I0121 11:03:29.851907 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:03:29 crc kubenswrapper[4881]: I0121 11:03:29.852489 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:03:59 crc kubenswrapper[4881]: I0121 11:03:59.851824 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:03:59 crc kubenswrapper[4881]: I0121 11:03:59.852776 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:03:59 crc kubenswrapper[4881]: I0121 11:03:59.852881 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 11:03:59 crc kubenswrapper[4881]: I0121 11:03:59.853829 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f08eae3fb5bfbc3b6dfa6839a34471cb41febf3495ae4845e42b68ed33af40f1"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:03:59 crc kubenswrapper[4881]: I0121 11:03:59.853900 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://f08eae3fb5bfbc3b6dfa6839a34471cb41febf3495ae4845e42b68ed33af40f1" gracePeriod=600 Jan 21 11:04:00 crc kubenswrapper[4881]: I0121 11:04:00.730805 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="f08eae3fb5bfbc3b6dfa6839a34471cb41febf3495ae4845e42b68ed33af40f1" exitCode=0 Jan 21 11:04:00 crc kubenswrapper[4881]: I0121 11:04:00.730921 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"f08eae3fb5bfbc3b6dfa6839a34471cb41febf3495ae4845e42b68ed33af40f1"} Jan 21 11:04:00 crc kubenswrapper[4881]: I0121 11:04:00.731821 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"51d484e782c204b0b6011f8d0be626571952d106a910dddde0a66e728028905b"} Jan 21 11:04:00 crc kubenswrapper[4881]: I0121 11:04:00.731866 4881 scope.go:117] "RemoveContainer" containerID="7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d" Jan 21 11:04:53 crc kubenswrapper[4881]: I0121 11:04:53.891258 4881 scope.go:117] "RemoveContainer" containerID="8f66d538b15eac6e19eeb1b6e73b0917e7cb4600d289674a11496b4ddb805259" Jan 21 11:06:29 crc kubenswrapper[4881]: I0121 11:06:29.851510 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:06:29 crc kubenswrapper[4881]: I0121 11:06:29.852698 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 21 11:06:53 crc kubenswrapper[4881]: I0121 11:06:53.939955 4881 scope.go:117] "RemoveContainer" containerID="af52521bc076413d8e72a4c4cff88c04fc3be6a74567d99416c9a8f9f7a66758" Jan 21 11:06:53 crc kubenswrapper[4881]: I0121 11:06:53.980319 4881 scope.go:117] "RemoveContainer" containerID="091b8c7421a6daba2d38abc6600200f92a99a9d9fffb2a18673337cc1cab5a28" Jan 21 11:06:59 crc kubenswrapper[4881]: I0121 11:06:59.851658 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:06:59 crc kubenswrapper[4881]: I0121 11:06:59.852373 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:07:29 crc kubenswrapper[4881]: I0121 11:07:29.851683 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:07:29 crc kubenswrapper[4881]: I0121 11:07:29.852354 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:07:29 crc kubenswrapper[4881]: I0121 11:07:29.852415 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 11:07:29 crc kubenswrapper[4881]: I0121 11:07:29.853159 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"51d484e782c204b0b6011f8d0be626571952d106a910dddde0a66e728028905b"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:07:29 crc kubenswrapper[4881]: I0121 11:07:29.853248 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://51d484e782c204b0b6011f8d0be626571952d106a910dddde0a66e728028905b" gracePeriod=600 Jan 21 11:07:30 crc kubenswrapper[4881]: I0121 11:07:30.143186 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="51d484e782c204b0b6011f8d0be626571952d106a910dddde0a66e728028905b" exitCode=0 Jan 21 11:07:30 crc kubenswrapper[4881]: I0121 11:07:30.143294 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"51d484e782c204b0b6011f8d0be626571952d106a910dddde0a66e728028905b"} Jan 21 11:07:30 crc 
kubenswrapper[4881]: I0121 11:07:30.143400 4881 scope.go:117] "RemoveContainer" containerID="f08eae3fb5bfbc3b6dfa6839a34471cb41febf3495ae4845e42b68ed33af40f1" Jan 21 11:07:31 crc kubenswrapper[4881]: I0121 11:07:31.153609 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"c61b3d568dcd0ae9a4c5e1f2de21cf5a0db2cf65652a9e217f03473254856b16"} Jan 21 11:08:49 crc kubenswrapper[4881]: I0121 11:08:49.889683 4881 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.242101 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-cdm4s"] Jan 21 11:09:04 crc kubenswrapper[4881]: E0121 11:09:04.244318 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec369bed-0b60-48b0-9de0-fcfd6ca7776d" containerName="registry" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.244438 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec369bed-0b60-48b0-9de0-fcfd6ca7776d" containerName="registry" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.244665 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec369bed-0b60-48b0-9de0-fcfd6ca7776d" containerName="registry" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.245288 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cdm4s" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.248557 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.252328 4881 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-wtp5l" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.252587 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.262174 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-h2ttp"] Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.263197 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-h2ttp" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.265730 4881 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-fpfvh" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.269614 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-cdm4s"] Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.275440 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-h2ttp"] Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.299411 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-csqtv"] Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.300400 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-csqtv" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.306152 4881 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-nbb9f" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.317393 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-csqtv"] Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.362622 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s947\" (UniqueName: \"kubernetes.io/projected/faf7e95d-07e7-4d3d-936b-26b187fc0b0c-kube-api-access-5s947\") pod \"cert-manager-858654f9db-h2ttp\" (UID: \"faf7e95d-07e7-4d3d-936b-26b187fc0b0c\") " pod="cert-manager/cert-manager-858654f9db-h2ttp" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.362690 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l24bg\" (UniqueName: \"kubernetes.io/projected/1d8014cf-8827-449d-b5fa-d0c098cc377e-kube-api-access-l24bg\") pod \"cert-manager-cainjector-cf98fcc89-cdm4s\" (UID: \"1d8014cf-8827-449d-b5fa-d0c098cc377e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-cdm4s" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.464762 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s947\" (UniqueName: \"kubernetes.io/projected/faf7e95d-07e7-4d3d-936b-26b187fc0b0c-kube-api-access-5s947\") pod \"cert-manager-858654f9db-h2ttp\" (UID: \"faf7e95d-07e7-4d3d-936b-26b187fc0b0c\") " pod="cert-manager/cert-manager-858654f9db-h2ttp" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.464860 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l24bg\" (UniqueName: \"kubernetes.io/projected/1d8014cf-8827-449d-b5fa-d0c098cc377e-kube-api-access-l24bg\") pod \"cert-manager-cainjector-cf98fcc89-cdm4s\" (UID: \"1d8014cf-8827-449d-b5fa-d0c098cc377e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-cdm4s" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.464931 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgl4w\" (UniqueName: \"kubernetes.io/projected/2aeab03b-23ac-4cc2-8f0f-db1111ef2cc4-kube-api-access-lgl4w\") pod \"cert-manager-webhook-687f57d79b-csqtv\" (UID: \"2aeab03b-23ac-4cc2-8f0f-db1111ef2cc4\") " pod="cert-manager/cert-manager-webhook-687f57d79b-csqtv" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.488541 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s947\" (UniqueName: \"kubernetes.io/projected/faf7e95d-07e7-4d3d-936b-26b187fc0b0c-kube-api-access-5s947\") pod \"cert-manager-858654f9db-h2ttp\" (UID: \"faf7e95d-07e7-4d3d-936b-26b187fc0b0c\") " pod="cert-manager/cert-manager-858654f9db-h2ttp" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.488618 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l24bg\" (UniqueName: \"kubernetes.io/projected/1d8014cf-8827-449d-b5fa-d0c098cc377e-kube-api-access-l24bg\") pod \"cert-manager-cainjector-cf98fcc89-cdm4s\" (UID: \"1d8014cf-8827-449d-b5fa-d0c098cc377e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-cdm4s" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.565814 4881 util.go:30] "No sandbox for pod can be found. 
Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.566227 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgl4w\" (UniqueName: \"kubernetes.io/projected/2aeab03b-23ac-4cc2-8f0f-db1111ef2cc4-kube-api-access-lgl4w\") pod \"cert-manager-webhook-687f57d79b-csqtv\" (UID: \"2aeab03b-23ac-4cc2-8f0f-db1111ef2cc4\") " pod="cert-manager/cert-manager-webhook-687f57d79b-csqtv"
Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.583073 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-h2ttp"
Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.586436 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgl4w\" (UniqueName: \"kubernetes.io/projected/2aeab03b-23ac-4cc2-8f0f-db1111ef2cc4-kube-api-access-lgl4w\") pod \"cert-manager-webhook-687f57d79b-csqtv\" (UID: \"2aeab03b-23ac-4cc2-8f0f-db1111ef2cc4\") " pod="cert-manager/cert-manager-webhook-687f57d79b-csqtv"
Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.619713 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-csqtv"
Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.908821 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-csqtv"]
Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.923750 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 11:09:05 crc kubenswrapper[4881]: I0121 11:09:05.012041 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-cdm4s"]
Jan 21 11:09:05 crc kubenswrapper[4881]: W0121 11:09:05.021932 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d8014cf_8827_449d_b5fa_d0c098cc377e.slice/crio-d11a137a4d487b3787674cc7c05277ae88a77a2b6d288a5cc6a94e6b0be4df11 WatchSource:0}: Error finding container d11a137a4d487b3787674cc7c05277ae88a77a2b6d288a5cc6a94e6b0be4df11: Status 404 returned error can't find the container with id d11a137a4d487b3787674cc7c05277ae88a77a2b6d288a5cc6a94e6b0be4df11
Jan 21 11:09:05 crc kubenswrapper[4881]: I0121 11:09:05.062489 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-h2ttp"]
Jan 21 11:09:05 crc kubenswrapper[4881]: W0121 11:09:05.066363 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfaf7e95d_07e7_4d3d_936b_26b187fc0b0c.slice/crio-1f83a47acfbc835dde164804eff14272ef2b40ace6b303463f86bdf150b16ae1 WatchSource:0}: Error finding container 1f83a47acfbc835dde164804eff14272ef2b40ace6b303463f86bdf150b16ae1: Status 404 returned error can't find the container with id 1f83a47acfbc835dde164804eff14272ef2b40ace6b303463f86bdf150b16ae1
Jan 21 11:09:05 crc kubenswrapper[4881]: I0121 11:09:05.747662 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-csqtv" event={"ID":"2aeab03b-23ac-4cc2-8f0f-db1111ef2cc4","Type":"ContainerStarted","Data":"418f7c4757445d467f2ed9218b1861b0d514cd5a2f430ae2561534473ee1f49f"}
Jan 21 11:09:05 crc kubenswrapper[4881]: I0121 11:09:05.749731 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cdm4s" event={"ID":"1d8014cf-8827-449d-b5fa-d0c098cc377e","Type":"ContainerStarted","Data":"d11a137a4d487b3787674cc7c05277ae88a77a2b6d288a5cc6a94e6b0be4df11"}
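
Note: the two W-level manager.go:1169 entries above come from the kubelet's embedded cAdvisor. Its cgroup watcher notices the new crio-<id> cgroup as soon as the runtime creates it, and that can race ahead of CRI-O registering the container, hence the transient "Status 404 ... can't find the container" lookups while the sandboxes are still being set up. The ContainerStarted PLEG events that follow show both pods came up normally, so the warnings were benign here.
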
pod="cert-manager/cert-manager-cainjector-cf98fcc89-cdm4s" event={"ID":"1d8014cf-8827-449d-b5fa-d0c098cc377e","Type":"ContainerStarted","Data":"d11a137a4d487b3787674cc7c05277ae88a77a2b6d288a5cc6a94e6b0be4df11"} Jan 21 11:09:05 crc kubenswrapper[4881]: I0121 11:09:05.751028 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-h2ttp" event={"ID":"faf7e95d-07e7-4d3d-936b-26b187fc0b0c","Type":"ContainerStarted","Data":"1f83a47acfbc835dde164804eff14272ef2b40ace6b303463f86bdf150b16ae1"} Jan 21 11:09:08 crc kubenswrapper[4881]: I0121 11:09:08.779799 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-csqtv" event={"ID":"2aeab03b-23ac-4cc2-8f0f-db1111ef2cc4","Type":"ContainerStarted","Data":"eae0c35d82930a00fe111e3513015ecf6b34c7f998296bd2aca0cd7bab741ad9"} Jan 21 11:09:08 crc kubenswrapper[4881]: I0121 11:09:08.780692 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-csqtv" Jan 21 11:09:08 crc kubenswrapper[4881]: I0121 11:09:08.806249 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-csqtv" podStartSLOduration=1.732344085 podStartE2EDuration="4.806216973s" podCreationTimestamp="2026-01-21 11:09:04 +0000 UTC" firstStartedPulling="2026-01-21 11:09:04.923532437 +0000 UTC m=+732.183488906" lastFinishedPulling="2026-01-21 11:09:07.997405325 +0000 UTC m=+735.257361794" observedRunningTime="2026-01-21 11:09:08.797046896 +0000 UTC m=+736.057003365" watchObservedRunningTime="2026-01-21 11:09:08.806216973 +0000 UTC m=+736.066173442" Jan 21 11:09:09 crc kubenswrapper[4881]: I0121 11:09:09.788551 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cdm4s" event={"ID":"1d8014cf-8827-449d-b5fa-d0c098cc377e","Type":"ContainerStarted","Data":"da7dcfda8047a2fe8f0f19443f177b4697d37e30ea4a5e9c8911abd0ed087d28"} Jan 21 11:09:09 crc kubenswrapper[4881]: I0121 11:09:09.790500 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-h2ttp" event={"ID":"faf7e95d-07e7-4d3d-936b-26b187fc0b0c","Type":"ContainerStarted","Data":"bf0b79d023e0d95935fe58142c4d76a87be786faf89630d7e53563d975f0c8e3"} Jan 21 11:09:09 crc kubenswrapper[4881]: I0121 11:09:09.807207 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cdm4s" podStartSLOduration=1.43455704 podStartE2EDuration="5.80718568s" podCreationTimestamp="2026-01-21 11:09:04 +0000 UTC" firstStartedPulling="2026-01-21 11:09:05.024520413 +0000 UTC m=+732.284476882" lastFinishedPulling="2026-01-21 11:09:09.397149053 +0000 UTC m=+736.657105522" observedRunningTime="2026-01-21 11:09:09.802988591 +0000 UTC m=+737.062945060" watchObservedRunningTime="2026-01-21 11:09:09.80718568 +0000 UTC m=+737.067142149" Jan 21 11:09:09 crc kubenswrapper[4881]: I0121 11:09:09.827738 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-h2ttp" podStartSLOduration=1.43672156 podStartE2EDuration="5.827702114s" podCreationTimestamp="2026-01-21 11:09:04 +0000 UTC" firstStartedPulling="2026-01-21 11:09:05.069529166 +0000 UTC m=+732.329485635" lastFinishedPulling="2026-01-21 11:09:09.46050972 +0000 UTC m=+736.720466189" observedRunningTime="2026-01-21 11:09:09.821105529 +0000 UTC m=+737.081062028" watchObservedRunningTime="2026-01-21 
Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.278512 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bx64f"]
Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.279177 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovn-controller" containerID="cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e" gracePeriod=30
Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.279600 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="sbdb" containerID="cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef" gracePeriod=30
Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.279647 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="nbdb" containerID="cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db" gracePeriod=30
Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.279693 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="northd" containerID="cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6" gracePeriod=30
Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.279743 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38" gracePeriod=30
Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.279810 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="kube-rbac-proxy-node" containerID="cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045" gracePeriod=30
Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.279870 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovn-acl-logging" containerID="cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb" gracePeriod=30
Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.335448 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovnkube-controller" containerID="cri-o://d5e11e8e5cd4b0f5d5b59050f20100006189356085839bd098e65e66ddf3accb" gracePeriod=30
Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.812971 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fs42r_09da9e14-f6d5-4346-a4a0-c17711e3b603/kube-multus/1.log"
Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.813426 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fs42r_09da9e14-f6d5-4346-a4a0-c17711e3b603/kube-multus/0.log"
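
Note: each "Killing container with a grace period" entry above corresponds to the kubelet delivering SIGTERM to one container of the deleted pod and arming a 30-second deadline (gracePeriod=30, from the pod spec) after which it would escalate to SIGKILL. A schematic of that stop pattern (illustrative Go, not kubelet code; stopWithGrace and the sleep stand-in are mine):

    package main

    import (
    	"os/exec"
    	"syscall"
    	"time"
    )

    // stopWithGrace mirrors the per-container stop visible above: deliver
    // SIGTERM, wait up to the grace period, then escalate to SIGKILL.
    func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
    	_ = cmd.Process.Signal(syscall.SIGTERM)
    	done := make(chan error, 1)
    	go func() { done <- cmd.Wait() }()
    	select {
    	case err := <-done:
    		return err // exited within the grace period (exit codes 0/143 below)
    	case <-time.After(grace):
    		return cmd.Process.Kill() // deadline hit: SIGKILL
    	}
    }

    func main() {
    	cmd := exec.Command("sleep", "300") // stand-in for a container process
    	if err := cmd.Start(); err != nil {
    		panic(err)
    	}
    	_ = stopWithGrace(cmd, 30*time.Second)
    }
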
path="/var/log/pods/openshift-multus_multus-fs42r_09da9e14-f6d5-4346-a4a0-c17711e3b603/kube-multus/0.log" Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.813537 4881 generic.go:334] "Generic (PLEG): container finished" podID="09da9e14-f6d5-4346-a4a0-c17711e3b603" containerID="e44307f5cc08335dc686c05c12b4ac57aeb2211a1072fff108a06b37b2e1461b" exitCode=2 Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.813583 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fs42r" event={"ID":"09da9e14-f6d5-4346-a4a0-c17711e3b603","Type":"ContainerDied","Data":"e44307f5cc08335dc686c05c12b4ac57aeb2211a1072fff108a06b37b2e1461b"} Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.813764 4881 scope.go:117] "RemoveContainer" containerID="821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb" Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.814414 4881 scope.go:117] "RemoveContainer" containerID="e44307f5cc08335dc686c05c12b4ac57aeb2211a1072fff108a06b37b2e1461b" Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.819025 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovnkube-controller/2.log" Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.822564 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovn-acl-logging/0.log" Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823090 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovn-controller/0.log" Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823620 4881 generic.go:334] "Generic (PLEG): container finished" podID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerID="d5e11e8e5cd4b0f5d5b59050f20100006189356085839bd098e65e66ddf3accb" exitCode=0 Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823651 4881 generic.go:334] "Generic (PLEG): container finished" podID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerID="47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef" exitCode=0 Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823661 4881 generic.go:334] "Generic (PLEG): container finished" podID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerID="9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db" exitCode=0 Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823657 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerDied","Data":"d5e11e8e5cd4b0f5d5b59050f20100006189356085839bd098e65e66ddf3accb"} Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823705 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerDied","Data":"47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef"} Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823719 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerDied","Data":"9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db"} Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823728 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerDied","Data":"f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6"} Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823672 4881 generic.go:334] "Generic (PLEG): container finished" podID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerID="f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6" exitCode=0 Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823754 4881 generic.go:334] "Generic (PLEG): container finished" podID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerID="599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38" exitCode=0 Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823766 4881 generic.go:334] "Generic (PLEG): container finished" podID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerID="e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045" exitCode=0 Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823772 4881 generic.go:334] "Generic (PLEG): container finished" podID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerID="b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb" exitCode=143 Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823812 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerDied","Data":"599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38"} Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823831 4881 generic.go:334] "Generic (PLEG): container finished" podID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerID="d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e" exitCode=143 Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823849 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerDied","Data":"e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045"} Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823863 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerDied","Data":"b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb"} Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823877 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerDied","Data":"d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e"} Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.848699 4881 scope.go:117] "RemoveContainer" containerID="ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2" Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.973209 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovn-acl-logging/0.log" Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.974103 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovn-controller/0.log" Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.974649 4881 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040241 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6zplb"]
Jan 21 11:09:14 crc kubenswrapper[4881]: E0121 11:09:14.040492 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="kube-rbac-proxy-ovn-metrics"
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040509 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="kube-rbac-proxy-ovn-metrics"
Jan 21 11:09:14 crc kubenswrapper[4881]: E0121 11:09:14.040521 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="nbdb"
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040527 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="nbdb"
Jan 21 11:09:14 crc kubenswrapper[4881]: E0121 11:09:14.040533 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovnkube-controller"
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040541 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovnkube-controller"
Jan 21 11:09:14 crc kubenswrapper[4881]: E0121 11:09:14.040551 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="kube-rbac-proxy-node"
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040558 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="kube-rbac-proxy-node"
Jan 21 11:09:14 crc kubenswrapper[4881]: E0121 11:09:14.040566 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="northd"
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040574 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="northd"
Jan 21 11:09:14 crc kubenswrapper[4881]: E0121 11:09:14.040583 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovnkube-controller"
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040589 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovnkube-controller"
Jan 21 11:09:14 crc kubenswrapper[4881]: E0121 11:09:14.040596 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="sbdb"
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040602 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="sbdb"
Jan 21 11:09:14 crc kubenswrapper[4881]: E0121 11:09:14.040608 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovnkube-controller"
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040614 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovnkube-controller"
Jan 21 11:09:14 crc kubenswrapper[4881]: E0121 11:09:14.040620 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="kubecfg-setup"
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040626 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="kubecfg-setup"
Jan 21 11:09:14 crc kubenswrapper[4881]: E0121 11:09:14.040639 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovn-acl-logging"
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040644 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovn-acl-logging"
Jan 21 11:09:14 crc kubenswrapper[4881]: E0121 11:09:14.040657 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovn-controller"
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040663 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovn-controller"
Jan 21 11:09:14 crc kubenswrapper[4881]: E0121 11:09:14.040670 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovnkube-controller"
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040675 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovnkube-controller"
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040763 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovn-controller"
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040775 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovnkube-controller"
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040805 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="kube-rbac-proxy-node"
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040814 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="sbdb"
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040824 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovnkube-controller"
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040834 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="kube-rbac-proxy-ovn-metrics"
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040844 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovn-acl-logging"
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040851 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="nbdb"
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040863 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovnkube-controller"
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040870 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="northd"
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.041070 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovnkube-controller"
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.044540 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb"
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106191 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-cni-netd\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") "
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106252 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-run-ovn-kubernetes\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") "
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106278 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-cni-bin\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") "
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106309 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovnkube-script-lib\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") "
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106320 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106331 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-etc-openvswitch\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") "
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106357 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106402 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-var-lib-openvswitch\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") "
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106417 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106477 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106489 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz6fb\" (UniqueName: \"kubernetes.io/projected/e8bb6d97-b3b8-4e31-b704-8e565385ab26-kube-api-access-kz6fb\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") "
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106516 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-run-netns\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") "
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106523 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106544 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-ovn\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") "
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106568 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-kubelet\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") "
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106591 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-openvswitch\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") "
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106621 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovnkube-config\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") "
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106639 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-slash\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") "
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106674 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-log-socket\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") "
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106729 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-systemd\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") "
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106753 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-var-lib-cni-networks-ovn-kubernetes\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") "
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106772 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-node-log\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") "
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106826 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovn-node-metrics-cert\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") "
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106846 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-env-overrides\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") "
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106863 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-systemd-units\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") "
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106926 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106962 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-slash" (OuterVolumeSpecName: "host-slash") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106985 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107008 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107029 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107044 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-cni-netd\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107087 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-node-log\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107117 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-run-ovn\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107192 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a91a67db-c0f5-4c55-8e84-bea013d635d8-ovnkube-script-lib\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107217 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-run-openvswitch\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107240 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a91a67db-c0f5-4c55-8e84-bea013d635d8-ovn-node-metrics-cert\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107264 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a91a67db-c0f5-4c55-8e84-bea013d635d8-env-overrides\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107288 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-run-systemd\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107307 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-var-lib-openvswitch\") pod 
\"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107331 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107355 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgmjg\" (UniqueName: \"kubernetes.io/projected/a91a67db-c0f5-4c55-8e84-bea013d635d8-kube-api-access-wgmjg\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107378 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-slash\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107408 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-kubelet\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107439 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-systemd-units\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107461 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-run-netns\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107508 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-etc-openvswitch\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107538 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-cni-bin\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107567 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a91a67db-c0f5-4c55-8e84-bea013d635d8-ovnkube-config\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107588 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-log-socket\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107612 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-run-ovn-kubernetes\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107671 4881 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107685 4881 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107699 4881 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107731 4881 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107743 4881 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107754 4881 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107765 4881 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107775 4881 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107801 4881 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107813 
4881 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-slash\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107050 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107505 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-log-socket" (OuterVolumeSpecName: "log-socket") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107520 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107929 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107956 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107975 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-node-log" (OuterVolumeSpecName: "node-log") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107941 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "systemd-units". 
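
Note: within the 11:09:14.107xxx burst above, the klog timestamps are not monotonic in the dump (the .107050 TearDown appears after the .107813 detach) because many kubelet goroutines log concurrently and the journal orders entries by arrival. When reconstructing a timeline, order by the klog timestamp instead; since it is fixed-width and zero-padded, a plain string sort suffices (Go):

    package main

    import (
    	"fmt"
    	"sort"
    )

    func main() {
    	// Zero-padded fixed-width times sort correctly as strings.
    	stamps := []string{"11:09:14.107813", "11:09:14.107050", "11:09:14.107505"}
    	sort.Strings(stamps)
    	fmt.Println(stamps)
    }
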
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.112447 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8bb6d97-b3b8-4e31-b704-8e565385ab26-kube-api-access-kz6fb" (OuterVolumeSpecName: "kube-api-access-kz6fb") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "kube-api-access-kz6fb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.112645 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.120326 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209404 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-kubelet\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209482 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-systemd-units\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209500 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-run-netns\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209522 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-etc-openvswitch\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209538 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-cni-bin\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209532 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-kubelet\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209603 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-run-netns\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209614 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-etc-openvswitch\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209666 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-systemd-units\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209635 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-cni-bin\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209571 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a91a67db-c0f5-4c55-8e84-bea013d635d8-ovnkube-config\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209703 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-log-socket\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209719 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-run-ovn-kubernetes\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209734 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-cni-netd\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209760 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-node-log\") pod \"ovnkube-node-6zplb\" (UID: 
\"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209798 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-run-ovn\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209828 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a91a67db-c0f5-4c55-8e84-bea013d635d8-ovnkube-script-lib\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209846 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-run-openvswitch\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209861 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a91a67db-c0f5-4c55-8e84-bea013d635d8-ovn-node-metrics-cert\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209879 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a91a67db-c0f5-4c55-8e84-bea013d635d8-env-overrides\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209896 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-run-systemd\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209911 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-var-lib-openvswitch\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210004 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210075 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgmjg\" (UniqueName: \"kubernetes.io/projected/a91a67db-c0f5-4c55-8e84-bea013d635d8-kube-api-access-wgmjg\") pod \"ovnkube-node-6zplb\" (UID: 
\"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210095 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-slash\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210139 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz6fb\" (UniqueName: \"kubernetes.io/projected/e8bb6d97-b3b8-4e31-b704-8e565385ab26-kube-api-access-kz6fb\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210149 4881 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210158 4881 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210169 4881 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-log-socket\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210178 4881 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210188 4881 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210198 4881 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-node-log\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210206 4881 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210214 4881 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210222 4881 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210248 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-slash\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210269 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-log-socket\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210289 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-run-ovn-kubernetes\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210309 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-cni-netd\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210329 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-node-log\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210328 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a91a67db-c0f5-4c55-8e84-bea013d635d8-ovnkube-config\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210366 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-run-systemd\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210386 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-var-lib-openvswitch\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210408 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210705 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a91a67db-c0f5-4c55-8e84-bea013d635d8-env-overrides\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210754 4881 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-run-ovn\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.211316 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a91a67db-c0f5-4c55-8e84-bea013d635d8-ovnkube-script-lib\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.211361 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-run-openvswitch\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.215457 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a91a67db-c0f5-4c55-8e84-bea013d635d8-ovn-node-metrics-cert\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.227427 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgmjg\" (UniqueName: \"kubernetes.io/projected/a91a67db-c0f5-4c55-8e84-bea013d635d8-kube-api-access-wgmjg\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.364960 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: W0121 11:09:14.386827 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda91a67db_c0f5_4c55_8e84_bea013d635d8.slice/crio-0e012d8de72d92869fde4655c9c49b3e09d459cb824deef01a7961522e4e160e WatchSource:0}: Error finding container 0e012d8de72d92869fde4655c9c49b3e09d459cb824deef01a7961522e4e160e: Status 404 returned error can't find the container with id 0e012d8de72d92869fde4655c9c49b3e09d459cb824deef01a7961522e4e160e Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.623923 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-csqtv" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.834421 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fs42r_09da9e14-f6d5-4346-a4a0-c17711e3b603/kube-multus/1.log" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.834501 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fs42r" event={"ID":"09da9e14-f6d5-4346-a4a0-c17711e3b603","Type":"ContainerStarted","Data":"fb9e5e2cf8dadd445787c765b905521bee2d9a16e6fce0aac52c49f34c828713"} Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.841584 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovn-acl-logging/0.log" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.842051 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovn-controller/0.log" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.842528 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerDied","Data":"a06b3458bc6abd92816719b2c657b7e45cd4d79bda9753bf86e22c8e99a3027c"} Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.842605 4881 scope.go:117] "RemoveContainer" containerID="d5e11e8e5cd4b0f5d5b59050f20100006189356085839bd098e65e66ddf3accb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.842551 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.844301 4881 generic.go:334] "Generic (PLEG): container finished" podID="a91a67db-c0f5-4c55-8e84-bea013d635d8" containerID="d3e8393a708912b620f5e14e2013c207e4959dc41b6e81113d0c0ac8a1a442a0" exitCode=0 Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.844335 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" event={"ID":"a91a67db-c0f5-4c55-8e84-bea013d635d8","Type":"ContainerDied","Data":"d3e8393a708912b620f5e14e2013c207e4959dc41b6e81113d0c0ac8a1a442a0"} Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.844359 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" event={"ID":"a91a67db-c0f5-4c55-8e84-bea013d635d8","Type":"ContainerStarted","Data":"0e012d8de72d92869fde4655c9c49b3e09d459cb824deef01a7961522e4e160e"} Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.876589 4881 scope.go:117] "RemoveContainer" containerID="47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.905586 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bx64f"] Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.913879 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bx64f"] Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.917280 4881 scope.go:117] "RemoveContainer" containerID="9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.940757 4881 scope.go:117] "RemoveContainer" containerID="f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.970649 4881 scope.go:117] "RemoveContainer" containerID="599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.990985 4881 scope.go:117] "RemoveContainer" containerID="e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045" Jan 21 11:09:15 crc kubenswrapper[4881]: I0121 11:09:15.010811 4881 scope.go:117] "RemoveContainer" containerID="b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb" Jan 21 11:09:15 crc kubenswrapper[4881]: I0121 11:09:15.032033 4881 scope.go:117] "RemoveContainer" containerID="d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e" Jan 21 11:09:15 crc kubenswrapper[4881]: I0121 11:09:15.053654 4881 scope.go:117] "RemoveContainer" containerID="db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd" Jan 21 11:09:15 crc kubenswrapper[4881]: I0121 11:09:15.318541 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" path="/var/lib/kubelet/pods/e8bb6d97-b3b8-4e31-b704-8e565385ab26/volumes" Jan 21 11:09:15 crc kubenswrapper[4881]: I0121 11:09:15.856242 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" event={"ID":"a91a67db-c0f5-4c55-8e84-bea013d635d8","Type":"ContainerStarted","Data":"a4725048a64c4e17e4af56b9f0f6b04b5a55ef0c14f491c09e2fe39c6be0318d"} Jan 21 11:09:15 crc kubenswrapper[4881]: I0121 11:09:15.856317 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" 
event={"ID":"a91a67db-c0f5-4c55-8e84-bea013d635d8","Type":"ContainerStarted","Data":"a478e4a8979018f2acec5b4287a08c99b39860a144ac2ddd45e87a9e040109f1"} Jan 21 11:09:15 crc kubenswrapper[4881]: I0121 11:09:15.856342 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" event={"ID":"a91a67db-c0f5-4c55-8e84-bea013d635d8","Type":"ContainerStarted","Data":"e8c6c84126fdb3b2719f792c2385e8724a341e3996df3de0d5f86a747404a3d3"} Jan 21 11:09:15 crc kubenswrapper[4881]: I0121 11:09:15.856357 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" event={"ID":"a91a67db-c0f5-4c55-8e84-bea013d635d8","Type":"ContainerStarted","Data":"2144d25eae82c27e114599e7589d6e03f970d068cef8cc80ff9b650beba5440c"} Jan 21 11:09:15 crc kubenswrapper[4881]: I0121 11:09:15.856368 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" event={"ID":"a91a67db-c0f5-4c55-8e84-bea013d635d8","Type":"ContainerStarted","Data":"9d8b2814d009b89a7b9c947e01341e1d6bf0ba6feb3289ba739ecbc7d693a99a"} Jan 21 11:09:15 crc kubenswrapper[4881]: I0121 11:09:15.856379 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" event={"ID":"a91a67db-c0f5-4c55-8e84-bea013d635d8","Type":"ContainerStarted","Data":"7024df4a849a9c3072244b84f5effd45379ecc7d07d0dd890f4a027255244eed"} Jan 21 11:09:18 crc kubenswrapper[4881]: I0121 11:09:18.885444 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" event={"ID":"a91a67db-c0f5-4c55-8e84-bea013d635d8","Type":"ContainerStarted","Data":"7a69d816f1d1650253dc14e4afaa0acc554cf2e4aae031e84fc8be1626d15637"} Jan 21 11:09:21 crc kubenswrapper[4881]: I0121 11:09:21.913312 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" event={"ID":"a91a67db-c0f5-4c55-8e84-bea013d635d8","Type":"ContainerStarted","Data":"5685e4bcc7bbdf6712541b8ca39fd0d9d2d1d34c28cdeca8299f5c2650fb05c0"} Jan 21 11:09:21 crc kubenswrapper[4881]: I0121 11:09:21.915199 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:21 crc kubenswrapper[4881]: I0121 11:09:21.915241 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:21 crc kubenswrapper[4881]: I0121 11:09:21.915301 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:21 crc kubenswrapper[4881]: I0121 11:09:21.946204 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:21 crc kubenswrapper[4881]: I0121 11:09:21.949568 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" podStartSLOduration=7.949541963 podStartE2EDuration="7.949541963s" podCreationTimestamp="2026-01-21 11:09:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:09:21.94813276 +0000 UTC m=+749.208089229" watchObservedRunningTime="2026-01-21 11:09:21.949541963 +0000 UTC m=+749.209498432" Jan 21 11:09:21 crc kubenswrapper[4881]: I0121 11:09:21.951773 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:44 crc kubenswrapper[4881]: I0121 11:09:44.436595 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:47 crc kubenswrapper[4881]: I0121 11:09:47.419311 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x"] Jan 21 11:09:47 crc kubenswrapper[4881]: I0121 11:09:47.420963 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" Jan 21 11:09:47 crc kubenswrapper[4881]: I0121 11:09:47.423729 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 11:09:47 crc kubenswrapper[4881]: I0121 11:09:47.436585 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x"] Jan 21 11:09:47 crc kubenswrapper[4881]: I0121 11:09:47.563197 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31ed4736-a43c-4891-aeb4-e09d573a30b3-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x\" (UID: \"31ed4736-a43c-4891-aeb4-e09d573a30b3\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" Jan 21 11:09:47 crc kubenswrapper[4881]: I0121 11:09:47.563699 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gftjk\" (UniqueName: \"kubernetes.io/projected/31ed4736-a43c-4891-aeb4-e09d573a30b3-kube-api-access-gftjk\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x\" (UID: \"31ed4736-a43c-4891-aeb4-e09d573a30b3\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" Jan 21 11:09:47 crc kubenswrapper[4881]: I0121 11:09:47.564070 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31ed4736-a43c-4891-aeb4-e09d573a30b3-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x\" (UID: \"31ed4736-a43c-4891-aeb4-e09d573a30b3\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" Jan 21 11:09:47 crc kubenswrapper[4881]: I0121 11:09:47.666081 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31ed4736-a43c-4891-aeb4-e09d573a30b3-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x\" (UID: \"31ed4736-a43c-4891-aeb4-e09d573a30b3\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" Jan 21 11:09:47 crc kubenswrapper[4881]: I0121 11:09:47.666127 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gftjk\" (UniqueName: \"kubernetes.io/projected/31ed4736-a43c-4891-aeb4-e09d573a30b3-kube-api-access-gftjk\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x\" (UID: \"31ed4736-a43c-4891-aeb4-e09d573a30b3\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" Jan 21 11:09:47 crc kubenswrapper[4881]: I0121 11:09:47.666174 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31ed4736-a43c-4891-aeb4-e09d573a30b3-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x\" (UID: \"31ed4736-a43c-4891-aeb4-e09d573a30b3\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" Jan 21 11:09:47 crc kubenswrapper[4881]: I0121 11:09:47.666775 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31ed4736-a43c-4891-aeb4-e09d573a30b3-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x\" (UID: \"31ed4736-a43c-4891-aeb4-e09d573a30b3\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" Jan 21 11:09:47 crc kubenswrapper[4881]: I0121 11:09:47.666806 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31ed4736-a43c-4891-aeb4-e09d573a30b3-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x\" (UID: \"31ed4736-a43c-4891-aeb4-e09d573a30b3\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" Jan 21 11:09:47 crc kubenswrapper[4881]: I0121 11:09:47.695923 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gftjk\" (UniqueName: \"kubernetes.io/projected/31ed4736-a43c-4891-aeb4-e09d573a30b3-kube-api-access-gftjk\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x\" (UID: \"31ed4736-a43c-4891-aeb4-e09d573a30b3\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" Jan 21 11:09:47 crc kubenswrapper[4881]: I0121 11:09:47.748595 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" Jan 21 11:09:48 crc kubenswrapper[4881]: I0121 11:09:48.177114 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x"] Jan 21 11:09:49 crc kubenswrapper[4881]: I0121 11:09:49.088763 4881 generic.go:334] "Generic (PLEG): container finished" podID="31ed4736-a43c-4891-aeb4-e09d573a30b3" containerID="c2a56a521d759800c9653b77ec0ef19cc98db2ff50ec2ac953c6bdf463eef3f0" exitCode=0 Jan 21 11:09:49 crc kubenswrapper[4881]: I0121 11:09:49.088983 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" event={"ID":"31ed4736-a43c-4891-aeb4-e09d573a30b3","Type":"ContainerDied","Data":"c2a56a521d759800c9653b77ec0ef19cc98db2ff50ec2ac953c6bdf463eef3f0"} Jan 21 11:09:49 crc kubenswrapper[4881]: I0121 11:09:49.089114 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" event={"ID":"31ed4736-a43c-4891-aeb4-e09d573a30b3","Type":"ContainerStarted","Data":"59eca7aeecaa5488e578bb8d01ce90db7f1786d13aa2b2c8774bd4b63d6ef339"} Jan 21 11:09:49 crc kubenswrapper[4881]: I0121 11:09:49.757470 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qcdp7"] Jan 21 11:09:49 crc kubenswrapper[4881]: I0121 11:09:49.758731 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:09:49 crc kubenswrapper[4881]: I0121 11:09:49.774329 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qcdp7"] Jan 21 11:09:49 crc kubenswrapper[4881]: I0121 11:09:49.898952 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-utilities\") pod \"redhat-operators-qcdp7\" (UID: \"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5\") " pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:09:49 crc kubenswrapper[4881]: I0121 11:09:49.899008 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f784f\" (UniqueName: \"kubernetes.io/projected/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-kube-api-access-f784f\") pod \"redhat-operators-qcdp7\" (UID: \"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5\") " pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:09:49 crc kubenswrapper[4881]: I0121 11:09:49.899036 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-catalog-content\") pod \"redhat-operators-qcdp7\" (UID: \"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5\") " pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:09:50 crc kubenswrapper[4881]: I0121 11:09:50.000710 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-utilities\") pod \"redhat-operators-qcdp7\" (UID: \"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5\") " pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:09:50 crc kubenswrapper[4881]: I0121 11:09:50.000802 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f784f\" (UniqueName: \"kubernetes.io/projected/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-kube-api-access-f784f\") pod \"redhat-operators-qcdp7\" (UID: \"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5\") " pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:09:50 crc kubenswrapper[4881]: I0121 11:09:50.000894 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-catalog-content\") pod \"redhat-operators-qcdp7\" (UID: \"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5\") " pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:09:50 crc kubenswrapper[4881]: I0121 11:09:50.001426 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-utilities\") pod \"redhat-operators-qcdp7\" (UID: \"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5\") " pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:09:50 crc kubenswrapper[4881]: I0121 11:09:50.001485 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-catalog-content\") pod \"redhat-operators-qcdp7\" (UID: \"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5\") " pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:09:50 crc kubenswrapper[4881]: I0121 11:09:50.033405 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-f784f\" (UniqueName: \"kubernetes.io/projected/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-kube-api-access-f784f\") pod \"redhat-operators-qcdp7\" (UID: \"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5\") " pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:09:50 crc kubenswrapper[4881]: I0121 11:09:50.087980 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:09:50 crc kubenswrapper[4881]: I0121 11:09:50.512195 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qcdp7"] Jan 21 11:09:50 crc kubenswrapper[4881]: W0121 11:09:50.522587 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b0b6a69_9749_44d9_a00e_1e2ab801ffb5.slice/crio-98532678eeab7c4042478a6c9766f4371541822211024856817d2abded4b5cbf WatchSource:0}: Error finding container 98532678eeab7c4042478a6c9766f4371541822211024856817d2abded4b5cbf: Status 404 returned error can't find the container with id 98532678eeab7c4042478a6c9766f4371541822211024856817d2abded4b5cbf Jan 21 11:09:51 crc kubenswrapper[4881]: I0121 11:09:51.104091 4881 generic.go:334] "Generic (PLEG): container finished" podID="31ed4736-a43c-4891-aeb4-e09d573a30b3" containerID="feaf7a7c35393a3016bf0e0da39270751fc90da64abf56d09a63cf394acffd6d" exitCode=0 Jan 21 11:09:51 crc kubenswrapper[4881]: I0121 11:09:51.104148 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" event={"ID":"31ed4736-a43c-4891-aeb4-e09d573a30b3","Type":"ContainerDied","Data":"feaf7a7c35393a3016bf0e0da39270751fc90da64abf56d09a63cf394acffd6d"} Jan 21 11:09:51 crc kubenswrapper[4881]: I0121 11:09:51.106013 4881 generic.go:334] "Generic (PLEG): container finished" podID="0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" containerID="8d96b6ac2acd440f7e60cdd073c30593c6e0c4417e979419134016d123abd969" exitCode=0 Jan 21 11:09:51 crc kubenswrapper[4881]: I0121 11:09:51.106049 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qcdp7" event={"ID":"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5","Type":"ContainerDied","Data":"8d96b6ac2acd440f7e60cdd073c30593c6e0c4417e979419134016d123abd969"} Jan 21 11:09:51 crc kubenswrapper[4881]: I0121 11:09:51.106069 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qcdp7" event={"ID":"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5","Type":"ContainerStarted","Data":"98532678eeab7c4042478a6c9766f4371541822211024856817d2abded4b5cbf"} Jan 21 11:09:52 crc kubenswrapper[4881]: I0121 11:09:52.123827 4881 generic.go:334] "Generic (PLEG): container finished" podID="31ed4736-a43c-4891-aeb4-e09d573a30b3" containerID="ed22a5764e1b97078db6eeb1512ee4dbaf13083258d1f179d89e99f7e3bdd2d4" exitCode=0 Jan 21 11:09:52 crc kubenswrapper[4881]: I0121 11:09:52.123900 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" event={"ID":"31ed4736-a43c-4891-aeb4-e09d573a30b3","Type":"ContainerDied","Data":"ed22a5764e1b97078db6eeb1512ee4dbaf13083258d1f179d89e99f7e3bdd2d4"} Jan 21 11:09:52 crc kubenswrapper[4881]: I0121 11:09:52.127591 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qcdp7" 
event={"ID":"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5","Type":"ContainerStarted","Data":"6c72489f579e659d3691891984c6b73c6e38f55451044ec4d36e63d9b6a30869"} Jan 21 11:09:53 crc kubenswrapper[4881]: I0121 11:09:53.731829 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" Jan 21 11:09:53 crc kubenswrapper[4881]: I0121 11:09:53.856507 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31ed4736-a43c-4891-aeb4-e09d573a30b3-util\") pod \"31ed4736-a43c-4891-aeb4-e09d573a30b3\" (UID: \"31ed4736-a43c-4891-aeb4-e09d573a30b3\") " Jan 21 11:09:53 crc kubenswrapper[4881]: I0121 11:09:53.856681 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31ed4736-a43c-4891-aeb4-e09d573a30b3-bundle\") pod \"31ed4736-a43c-4891-aeb4-e09d573a30b3\" (UID: \"31ed4736-a43c-4891-aeb4-e09d573a30b3\") " Jan 21 11:09:53 crc kubenswrapper[4881]: I0121 11:09:53.856832 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gftjk\" (UniqueName: \"kubernetes.io/projected/31ed4736-a43c-4891-aeb4-e09d573a30b3-kube-api-access-gftjk\") pod \"31ed4736-a43c-4891-aeb4-e09d573a30b3\" (UID: \"31ed4736-a43c-4891-aeb4-e09d573a30b3\") " Jan 21 11:09:53 crc kubenswrapper[4881]: I0121 11:09:53.861299 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31ed4736-a43c-4891-aeb4-e09d573a30b3-bundle" (OuterVolumeSpecName: "bundle") pod "31ed4736-a43c-4891-aeb4-e09d573a30b3" (UID: "31ed4736-a43c-4891-aeb4-e09d573a30b3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:09:53 crc kubenswrapper[4881]: I0121 11:09:53.873602 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31ed4736-a43c-4891-aeb4-e09d573a30b3-util" (OuterVolumeSpecName: "util") pod "31ed4736-a43c-4891-aeb4-e09d573a30b3" (UID: "31ed4736-a43c-4891-aeb4-e09d573a30b3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:09:53 crc kubenswrapper[4881]: I0121 11:09:53.912770 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31ed4736-a43c-4891-aeb4-e09d573a30b3-kube-api-access-gftjk" (OuterVolumeSpecName: "kube-api-access-gftjk") pod "31ed4736-a43c-4891-aeb4-e09d573a30b3" (UID: "31ed4736-a43c-4891-aeb4-e09d573a30b3"). InnerVolumeSpecName "kube-api-access-gftjk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:09:53 crc kubenswrapper[4881]: I0121 11:09:53.958989 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gftjk\" (UniqueName: \"kubernetes.io/projected/31ed4736-a43c-4891-aeb4-e09d573a30b3-kube-api-access-gftjk\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:53 crc kubenswrapper[4881]: I0121 11:09:53.959032 4881 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31ed4736-a43c-4891-aeb4-e09d573a30b3-util\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:53 crc kubenswrapper[4881]: I0121 11:09:53.959041 4881 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31ed4736-a43c-4891-aeb4-e09d573a30b3-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:54 crc kubenswrapper[4881]: I0121 11:09:54.243807 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" event={"ID":"31ed4736-a43c-4891-aeb4-e09d573a30b3","Type":"ContainerDied","Data":"59eca7aeecaa5488e578bb8d01ce90db7f1786d13aa2b2c8774bd4b63d6ef339"} Jan 21 11:09:54 crc kubenswrapper[4881]: I0121 11:09:54.243858 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59eca7aeecaa5488e578bb8d01ce90db7f1786d13aa2b2c8774bd4b63d6ef339" Jan 21 11:09:54 crc kubenswrapper[4881]: I0121 11:09:54.243953 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" Jan 21 11:09:55 crc kubenswrapper[4881]: I0121 11:09:55.266742 4881 generic.go:334] "Generic (PLEG): container finished" podID="0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" containerID="6c72489f579e659d3691891984c6b73c6e38f55451044ec4d36e63d9b6a30869" exitCode=0 Jan 21 11:09:55 crc kubenswrapper[4881]: I0121 11:09:55.266857 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qcdp7" event={"ID":"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5","Type":"ContainerDied","Data":"6c72489f579e659d3691891984c6b73c6e38f55451044ec4d36e63d9b6a30869"} Jan 21 11:09:56 crc kubenswrapper[4881]: I0121 11:09:56.277833 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qcdp7" event={"ID":"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5","Type":"ContainerStarted","Data":"caff78396a524a2b7173fa89076846a700461a26e3edd64b51c4f8b958b5c232"} Jan 21 11:09:56 crc kubenswrapper[4881]: I0121 11:09:56.302984 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qcdp7" podStartSLOduration=2.674724336 podStartE2EDuration="7.302964407s" podCreationTimestamp="2026-01-21 11:09:49 +0000 UTC" firstStartedPulling="2026-01-21 11:09:51.107644157 +0000 UTC m=+778.367600616" lastFinishedPulling="2026-01-21 11:09:55.735884218 +0000 UTC m=+782.995840687" observedRunningTime="2026-01-21 11:09:56.301032049 +0000 UTC m=+783.560988518" watchObservedRunningTime="2026-01-21 11:09:56.302964407 +0000 UTC m=+783.562920876" Jan 21 11:09:59 crc kubenswrapper[4881]: I0121 11:09:59.850994 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:09:59 crc 
kubenswrapper[4881]: I0121 11:09:59.852349 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:10:00 crc kubenswrapper[4881]: I0121 11:10:00.088375 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:10:00 crc kubenswrapper[4881]: I0121 11:10:00.089622 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:10:01 crc kubenswrapper[4881]: I0121 11:10:01.361090 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qcdp7" podUID="0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" containerName="registry-server" probeResult="failure" output=< Jan 21 11:10:01 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 11:10:01 crc kubenswrapper[4881]: > Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.620205 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-rp92p"] Jan 21 11:10:06 crc kubenswrapper[4881]: E0121 11:10:06.621597 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ed4736-a43c-4891-aeb4-e09d573a30b3" containerName="extract" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.621676 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ed4736-a43c-4891-aeb4-e09d573a30b3" containerName="extract" Jan 21 11:10:06 crc kubenswrapper[4881]: E0121 11:10:06.621738 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ed4736-a43c-4891-aeb4-e09d573a30b3" containerName="pull" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.621809 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ed4736-a43c-4891-aeb4-e09d573a30b3" containerName="pull" Jan 21 11:10:06 crc kubenswrapper[4881]: E0121 11:10:06.621873 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ed4736-a43c-4891-aeb4-e09d573a30b3" containerName="util" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.621930 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ed4736-a43c-4891-aeb4-e09d573a30b3" containerName="util" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.622108 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="31ed4736-a43c-4891-aeb4-e09d573a30b3" containerName="extract" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.622638 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rp92p" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.627443 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-nmb98" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.627541 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.627454 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.634667 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-rp92p"] Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.681305 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlqxv\" (UniqueName: \"kubernetes.io/projected/999c36a2-9f08-4da1-b14a-859ac888ae38-kube-api-access-rlqxv\") pod \"obo-prometheus-operator-68bc856cb9-rp92p\" (UID: \"999c36a2-9f08-4da1-b14a-859ac888ae38\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rp92p" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.782723 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlqxv\" (UniqueName: \"kubernetes.io/projected/999c36a2-9f08-4da1-b14a-859ac888ae38-kube-api-access-rlqxv\") pod \"obo-prometheus-operator-68bc856cb9-rp92p\" (UID: \"999c36a2-9f08-4da1-b14a-859ac888ae38\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rp92p" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.793419 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-n5xvb"] Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.794748 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-n5xvb" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.799253 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.800609 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-kbbml" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.817871 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlqxv\" (UniqueName: \"kubernetes.io/projected/999c36a2-9f08-4da1-b14a-859ac888ae38-kube-api-access-rlqxv\") pod \"obo-prometheus-operator-68bc856cb9-rp92p\" (UID: \"999c36a2-9f08-4da1-b14a-859ac888ae38\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rp92p" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.822518 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-h5vzg"] Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.823466 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-h5vzg" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.827942 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-n5xvb"] Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.870095 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-h5vzg"] Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.884897 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/952218f5-7dfc-40d5-a1df-2c462e1e4dcc-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75db897d97-n5xvb\" (UID: \"952218f5-7dfc-40d5-a1df-2c462e1e4dcc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-n5xvb" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.885373 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/952218f5-7dfc-40d5-a1df-2c462e1e4dcc-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75db897d97-n5xvb\" (UID: \"952218f5-7dfc-40d5-a1df-2c462e1e4dcc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-n5xvb" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.885516 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2181303-fd96-43e5-b6f2-158cca65c0b4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75db897d97-h5vzg\" (UID: \"c2181303-fd96-43e5-b6f2-158cca65c0b4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-h5vzg" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.885648 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c2181303-fd96-43e5-b6f2-158cca65c0b4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75db897d97-h5vzg\" (UID: \"c2181303-fd96-43e5-b6f2-158cca65c0b4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-h5vzg" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.945886 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rp92p" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.987298 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/952218f5-7dfc-40d5-a1df-2c462e1e4dcc-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75db897d97-n5xvb\" (UID: \"952218f5-7dfc-40d5-a1df-2c462e1e4dcc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-n5xvb" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.987373 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/952218f5-7dfc-40d5-a1df-2c462e1e4dcc-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75db897d97-n5xvb\" (UID: \"952218f5-7dfc-40d5-a1df-2c462e1e4dcc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-n5xvb" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.987421 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2181303-fd96-43e5-b6f2-158cca65c0b4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75db897d97-h5vzg\" (UID: \"c2181303-fd96-43e5-b6f2-158cca65c0b4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-h5vzg" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.987472 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c2181303-fd96-43e5-b6f2-158cca65c0b4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75db897d97-h5vzg\" (UID: \"c2181303-fd96-43e5-b6f2-158cca65c0b4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-h5vzg" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:06.999841 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c2181303-fd96-43e5-b6f2-158cca65c0b4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75db897d97-h5vzg\" (UID: \"c2181303-fd96-43e5-b6f2-158cca65c0b4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-h5vzg" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.002267 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/952218f5-7dfc-40d5-a1df-2c462e1e4dcc-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75db897d97-n5xvb\" (UID: \"952218f5-7dfc-40d5-a1df-2c462e1e4dcc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-n5xvb" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.012389 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2181303-fd96-43e5-b6f2-158cca65c0b4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75db897d97-h5vzg\" (UID: \"c2181303-fd96-43e5-b6f2-158cca65c0b4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-h5vzg" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.017288 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/952218f5-7dfc-40d5-a1df-2c462e1e4dcc-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-75db897d97-n5xvb\" (UID: \"952218f5-7dfc-40d5-a1df-2c462e1e4dcc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-n5xvb" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.043216 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-tfzsc"] Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.044572 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-tfzsc" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.047748 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.050978 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-rj78c" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.090548 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-tfzsc"] Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.118201 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-n5xvb" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.150104 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-h5vzg" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.190552 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/19be64a6-6795-4219-8d58-47f744ef8e13-observability-operator-tls\") pod \"observability-operator-59bdc8b94-tfzsc\" (UID: \"19be64a6-6795-4219-8d58-47f744ef8e13\") " pod="openshift-operators/observability-operator-59bdc8b94-tfzsc" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.191016 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtnd9\" (UniqueName: \"kubernetes.io/projected/19be64a6-6795-4219-8d58-47f744ef8e13-kube-api-access-vtnd9\") pod \"observability-operator-59bdc8b94-tfzsc\" (UID: \"19be64a6-6795-4219-8d58-47f744ef8e13\") " pod="openshift-operators/observability-operator-59bdc8b94-tfzsc" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.248733 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-6srxm"] Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.250756 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-6srxm" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.255589 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-65rjm" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.267068 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-6srxm"] Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.293150 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtnd9\" (UniqueName: \"kubernetes.io/projected/19be64a6-6795-4219-8d58-47f744ef8e13-kube-api-access-vtnd9\") pod \"observability-operator-59bdc8b94-tfzsc\" (UID: \"19be64a6-6795-4219-8d58-47f744ef8e13\") " pod="openshift-operators/observability-operator-59bdc8b94-tfzsc" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.293231 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/19be64a6-6795-4219-8d58-47f744ef8e13-observability-operator-tls\") pod \"observability-operator-59bdc8b94-tfzsc\" (UID: \"19be64a6-6795-4219-8d58-47f744ef8e13\") " pod="openshift-operators/observability-operator-59bdc8b94-tfzsc" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.293331 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcj9n\" (UniqueName: \"kubernetes.io/projected/1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50-kube-api-access-gcj9n\") pod \"perses-operator-5bf474d74f-6srxm\" (UID: \"1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50\") " pod="openshift-operators/perses-operator-5bf474d74f-6srxm" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.293374 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50-openshift-service-ca\") pod \"perses-operator-5bf474d74f-6srxm\" (UID: \"1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50\") " pod="openshift-operators/perses-operator-5bf474d74f-6srxm" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.299516 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/19be64a6-6795-4219-8d58-47f744ef8e13-observability-operator-tls\") pod \"observability-operator-59bdc8b94-tfzsc\" (UID: \"19be64a6-6795-4219-8d58-47f744ef8e13\") " pod="openshift-operators/observability-operator-59bdc8b94-tfzsc" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.345555 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtnd9\" (UniqueName: \"kubernetes.io/projected/19be64a6-6795-4219-8d58-47f744ef8e13-kube-api-access-vtnd9\") pod \"observability-operator-59bdc8b94-tfzsc\" (UID: \"19be64a6-6795-4219-8d58-47f744ef8e13\") " pod="openshift-operators/observability-operator-59bdc8b94-tfzsc" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.363972 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-tfzsc" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.395028 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50-openshift-service-ca\") pod \"perses-operator-5bf474d74f-6srxm\" (UID: \"1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50\") " pod="openshift-operators/perses-operator-5bf474d74f-6srxm" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.395238 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcj9n\" (UniqueName: \"kubernetes.io/projected/1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50-kube-api-access-gcj9n\") pod \"perses-operator-5bf474d74f-6srxm\" (UID: \"1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50\") " pod="openshift-operators/perses-operator-5bf474d74f-6srxm" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.398426 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50-openshift-service-ca\") pod \"perses-operator-5bf474d74f-6srxm\" (UID: \"1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50\") " pod="openshift-operators/perses-operator-5bf474d74f-6srxm" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.426547 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcj9n\" (UniqueName: \"kubernetes.io/projected/1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50-kube-api-access-gcj9n\") pod \"perses-operator-5bf474d74f-6srxm\" (UID: \"1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50\") " pod="openshift-operators/perses-operator-5bf474d74f-6srxm" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.634368 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-6srxm" Jan 21 11:10:08 crc kubenswrapper[4881]: I0121 11:10:08.609658 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-rp92p"] Jan 21 11:10:08 crc kubenswrapper[4881]: I0121 11:10:08.766911 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-6srxm"] Jan 21 11:10:08 crc kubenswrapper[4881]: W0121 11:10:08.768919 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cfbfa78_5e7c_4a57_9d98_e11fb36d0f50.slice/crio-29d6e582a45a89f893e70dc747c3d30492687e38b9e2a00344cf54adb1b12764 WatchSource:0}: Error finding container 29d6e582a45a89f893e70dc747c3d30492687e38b9e2a00344cf54adb1b12764: Status 404 returned error can't find the container with id 29d6e582a45a89f893e70dc747c3d30492687e38b9e2a00344cf54adb1b12764 Jan 21 11:10:08 crc kubenswrapper[4881]: I0121 11:10:08.835377 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-tfzsc"] Jan 21 11:10:08 crc kubenswrapper[4881]: I0121 11:10:08.927799 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-h5vzg"] Jan 21 11:10:08 crc kubenswrapper[4881]: I0121 11:10:08.970397 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-n5xvb"] Jan 21 11:10:09 crc kubenswrapper[4881]: I0121 11:10:09.701934 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-n5xvb" event={"ID":"952218f5-7dfc-40d5-a1df-2c462e1e4dcc","Type":"ContainerStarted","Data":"ca1991c1fe099cb2d669d1556a3f32de2ee53253fe42bbb64fdbee0199a2c8cf"} Jan 21 11:10:09 crc kubenswrapper[4881]: I0121 11:10:09.703595 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-h5vzg" event={"ID":"c2181303-fd96-43e5-b6f2-158cca65c0b4","Type":"ContainerStarted","Data":"7fcc611037f50df47e76edc764dfdfd5cfaedff64681ab53d2c9269b4961e76c"} Jan 21 11:10:09 crc kubenswrapper[4881]: I0121 11:10:09.704579 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-6srxm" event={"ID":"1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50","Type":"ContainerStarted","Data":"29d6e582a45a89f893e70dc747c3d30492687e38b9e2a00344cf54adb1b12764"} Jan 21 11:10:09 crc kubenswrapper[4881]: I0121 11:10:09.705417 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rp92p" event={"ID":"999c36a2-9f08-4da1-b14a-859ac888ae38","Type":"ContainerStarted","Data":"65f830bd8d0ac124324c1d731cb461efd52d6fdf91617bb3de2eed67af920956"} Jan 21 11:10:09 crc kubenswrapper[4881]: I0121 11:10:09.706249 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-tfzsc" event={"ID":"19be64a6-6795-4219-8d58-47f744ef8e13","Type":"ContainerStarted","Data":"09322c086c2445daa49e5e3bca74eeb493a75c74b89b9522118b07ac62da1250"} Jan 21 11:10:10 crc kubenswrapper[4881]: I0121 11:10:10.176610 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:10:10 crc kubenswrapper[4881]: I0121 11:10:10.262714 4881 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:10:10 crc kubenswrapper[4881]: I0121 11:10:10.425129 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qcdp7"] Jan 21 11:10:11 crc kubenswrapper[4881]: I0121 11:10:11.720530 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qcdp7" podUID="0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" containerName="registry-server" containerID="cri-o://caff78396a524a2b7173fa89076846a700461a26e3edd64b51c4f8b958b5c232" gracePeriod=2 Jan 21 11:10:12 crc kubenswrapper[4881]: I0121 11:10:12.796630 4881 generic.go:334] "Generic (PLEG): container finished" podID="0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" containerID="caff78396a524a2b7173fa89076846a700461a26e3edd64b51c4f8b958b5c232" exitCode=0 Jan 21 11:10:12 crc kubenswrapper[4881]: I0121 11:10:12.796906 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qcdp7" event={"ID":"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5","Type":"ContainerDied","Data":"caff78396a524a2b7173fa89076846a700461a26e3edd64b51c4f8b958b5c232"} Jan 21 11:10:13 crc kubenswrapper[4881]: I0121 11:10:13.820292 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qcdp7" event={"ID":"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5","Type":"ContainerDied","Data":"98532678eeab7c4042478a6c9766f4371541822211024856817d2abded4b5cbf"} Jan 21 11:10:13 crc kubenswrapper[4881]: I0121 11:10:13.820351 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98532678eeab7c4042478a6c9766f4371541822211024856817d2abded4b5cbf" Jan 21 11:10:13 crc kubenswrapper[4881]: I0121 11:10:13.905027 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:10:14 crc kubenswrapper[4881]: I0121 11:10:14.062883 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-utilities\") pod \"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5\" (UID: \"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5\") " Jan 21 11:10:14 crc kubenswrapper[4881]: I0121 11:10:14.062951 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-catalog-content\") pod \"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5\" (UID: \"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5\") " Jan 21 11:10:14 crc kubenswrapper[4881]: I0121 11:10:14.063100 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f784f\" (UniqueName: \"kubernetes.io/projected/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-kube-api-access-f784f\") pod \"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5\" (UID: \"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5\") " Jan 21 11:10:14 crc kubenswrapper[4881]: I0121 11:10:14.066048 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-utilities" (OuterVolumeSpecName: "utilities") pod "0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" (UID: "0b0b6a69-9749-44d9-a00e-1e2ab801ffb5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:10:14 crc kubenswrapper[4881]: I0121 11:10:14.072345 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-kube-api-access-f784f" (OuterVolumeSpecName: "kube-api-access-f784f") pod "0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" (UID: "0b0b6a69-9749-44d9-a00e-1e2ab801ffb5"). InnerVolumeSpecName "kube-api-access-f784f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:10:14 crc kubenswrapper[4881]: I0121 11:10:14.167644 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f784f\" (UniqueName: \"kubernetes.io/projected/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-kube-api-access-f784f\") on node \"crc\" DevicePath \"\"" Jan 21 11:10:14 crc kubenswrapper[4881]: I0121 11:10:14.167686 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:10:14 crc kubenswrapper[4881]: I0121 11:10:14.237142 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" (UID: "0b0b6a69-9749-44d9-a00e-1e2ab801ffb5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:10:14 crc kubenswrapper[4881]: I0121 11:10:14.269860 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:10:14 crc kubenswrapper[4881]: I0121 11:10:14.825606 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:10:14 crc kubenswrapper[4881]: I0121 11:10:14.870538 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qcdp7"] Jan 21 11:10:14 crc kubenswrapper[4881]: I0121 11:10:14.877187 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qcdp7"] Jan 21 11:10:15 crc kubenswrapper[4881]: I0121 11:10:15.321711 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" path="/var/lib/kubelet/pods/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5/volumes" Jan 21 11:10:26 crc kubenswrapper[4881]: E0121 11:10:26.634847 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8" Jan 21 11:10:26 crc kubenswrapper[4881]: E0121 11:10:26.635596 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:perses-operator,Image:registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openshift-service-ca,ReadOnly:true,MountPath:/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gcj9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod perses-operator-5bf474d74f-6srxm_openshift-operators(1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled" logger="UnhandledError" Jan 21 11:10:26 crc kubenswrapper[4881]: E0121 11:10:26.637045 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"perses-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/perses-operator-5bf474d74f-6srxm" podUID="1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50" Jan 21 11:10:27 crc kubenswrapper[4881]: E0121 11:10:27.007535 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"perses-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8\\\"\"" pod="openshift-operators/perses-operator-5bf474d74f-6srxm" podUID="1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50" Jan 21 11:10:28 crc kubenswrapper[4881]: E0121 11:10:28.222111 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" Jan 21 11:10:28 crc kubenswrapper[4881]: E0121 11:10:28.222509 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a,Command:[],Args:[--prometheus-config-reloader=$(RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER) --prometheus-instance-selector=app.kubernetes.io/managed-by=observability-operator --alertmanager-instance-selector=app.kubernetes.io/managed-by=observability-operator --thanos-ruler-instance-selector=app.kubernetes.io/managed-by=observability-operator --watch-referenced-objects-in-all-namespaces=true --disable-unmanaged-prometheus-configuration=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOGC,Value:30,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER,Value:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{157286400 0} {} 150Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rlqxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-68bc856cb9-rp92p_openshift-operators(999c36a2-9f08-4da1-b14a-859ac888ae38): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 21 11:10:28 crc kubenswrapper[4881]: E0121 11:10:28.224043 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rp92p" podUID="999c36a2-9f08-4da1-b14a-859ac888ae38"
Jan 21 11:10:29 crc kubenswrapper[4881]: I0121 11:10:29.022993 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-tfzsc" event={"ID":"19be64a6-6795-4219-8d58-47f744ef8e13","Type":"ContainerStarted","Data":"17e0c2d07ce4246619e0344f14e7c92d918936d15766cb45bda2f876e228395c"}
Jan 21 11:10:29 crc kubenswrapper[4881]: I0121 11:10:29.023212 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-tfzsc"
Jan 21 11:10:29 crc kubenswrapper[4881]: I0121 11:10:29.025054 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-n5xvb" event={"ID":"952218f5-7dfc-40d5-a1df-2c462e1e4dcc","Type":"ContainerStarted","Data":"8c7ccbb502e1aab769bdb56a7cbe8b6a680233a33d735a487cdf56a0358129e3"}
Jan 21 11:10:29 crc kubenswrapper[4881]: I0121 11:10:29.025349 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-tfzsc"
Jan 21 11:10:29 crc kubenswrapper[4881]: I0121 11:10:29.028452 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-h5vzg" event={"ID":"c2181303-fd96-43e5-b6f2-158cca65c0b4","Type":"ContainerStarted","Data":"981ce77d2d713b29004c7b615571658e3dfc3bc52d20c3d79bc9e6731e0fc0ca"}
Jan 21 11:10:29 crc kubenswrapper[4881]: E0121 11:10:29.030290 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rp92p" podUID="999c36a2-9f08-4da1-b14a-859ac888ae38"
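[The E-level records above show the failure path: CRI-O aborted the image copy ("context canceled"), the kubelet surfaced that as ErrImagePull, and the next sync turned it into ImagePullBackOff. The back-off delay roughly doubles per attempt up to a cap of about five minutes; both pulls succeed further down in this log, at 11:10:42-43. A small client-go sketch, under the same kubeconfig assumption as the earlier example, that prints the waiting reason these records correspond to:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("openshift-operators").Get(context.TODO(),
            "obo-prometheus-operator-68bc856cb9-rp92p", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // While the pull is failing, this prints ErrImagePull or ImagePullBackOff
        // together with the same back-off message that appears in the log.
        for _, st := range pod.Status.ContainerStatuses {
            if w := st.State.Waiting; w != nil {
                fmt.Printf("%s: %s: %s\n", st.Name, w.Reason, w.Message)
            }
        }
    }

Note that the pod sandbox and the other containers are unaffected; only the container whose image pull was canceled sits in the back-off loop.]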
Jan 21 11:10:29 crc kubenswrapper[4881]: I0121 11:10:29.051015 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-tfzsc" podStartSLOduration=3.586385187 podStartE2EDuration="23.050989442s" podCreationTimestamp="2026-01-21 11:10:06 +0000 UTC" firstStartedPulling="2026-01-21 11:10:08.867449957 +0000 UTC m=+796.127406426" lastFinishedPulling="2026-01-21 11:10:28.332054212 +0000 UTC m=+815.592010681" observedRunningTime="2026-01-21 11:10:29.047915896 +0000 UTC m=+816.307872375" watchObservedRunningTime="2026-01-21 11:10:29.050989442 +0000 UTC m=+816.310945911"
Jan 21 11:10:29 crc kubenswrapper[4881]: I0121 11:10:29.078155 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-h5vzg" podStartSLOduration=3.774144918 podStartE2EDuration="23.078130737s" podCreationTimestamp="2026-01-21 11:10:06 +0000 UTC" firstStartedPulling="2026-01-21 11:10:09.011272392 +0000 UTC m=+796.271228861" lastFinishedPulling="2026-01-21 11:10:28.315258211 +0000 UTC m=+815.575214680" observedRunningTime="2026-01-21 11:10:29.075490872 +0000 UTC m=+816.335447351" watchObservedRunningTime="2026-01-21 11:10:29.078130737 +0000 UTC m=+816.338087206"
Jan 21 11:10:29 crc kubenswrapper[4881]: I0121 11:10:29.131045 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-n5xvb" podStartSLOduration=3.818455015 podStartE2EDuration="23.131020683s" podCreationTimestamp="2026-01-21 11:10:06 +0000 UTC" firstStartedPulling="2026-01-21 11:10:08.988673229 +0000 UTC m=+796.248629698" lastFinishedPulling="2026-01-21 11:10:28.301238897 +0000 UTC m=+815.561195366" observedRunningTime="2026-01-21 11:10:29.12603403 +0000 UTC m=+816.385990499" watchObservedRunningTime="2026-01-21 11:10:29.131020683 +0000 UTC m=+816.390977152"
Jan 21 11:10:29 crc kubenswrapper[4881]: I0121 11:10:29.850831 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 11:10:29 crc kubenswrapper[4881]: I0121 11:10:29.850901 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 11:10:42 crc kubenswrapper[4881]: I0121 11:10:42.145678 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-6srxm" event={"ID":"1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50","Type":"ContainerStarted","Data":"7a0d646d4e071851d7ae6efc7bb55b00951ba41c92e4ee17fd7b4e1ccbaa52ce"}
Jan 21 11:10:42 crc kubenswrapper[4881]: I0121 11:10:42.147137 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-6srxm"
Jan 21 11:10:42 crc kubenswrapper[4881]: I0121 11:10:42.165342 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-6srxm" podStartSLOduration=3.022655099 podStartE2EDuration="35.165326859s" podCreationTimestamp="2026-01-21 11:10:07 +0000 UTC" firstStartedPulling="2026-01-21 11:10:08.774218483 +0000 UTC m=+796.034174952" lastFinishedPulling="2026-01-21 11:10:40.916890223 +0000 UTC m=+828.176846712" observedRunningTime="2026-01-21 11:10:42.161247709 +0000 UTC m=+829.421204178" watchObservedRunningTime="2026-01-21 11:10:42.165326859 +0000 UTC m=+829.425283328"
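[The pod_startup_latency_tracker fields above decompose cleanly, and the numbers confirm it: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). Worked through for observability-operator-59bdc8b94-tfzsc: 11:10:29.050989442 minus 11:10:06 gives 23.050989442s end to end; the pull window is 11:10:28.332054212 minus 11:10:08.867449957, i.e. 19.464604255s; and 23.050989442 minus 19.464604255 is 3.586385187s, exactly the reported podStartSLOduration. In other words, nearly all of the ~23s startup here is image pulling (a consequence of the earlier canceled and retried pulls), not scheduling or container start.]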
firstStartedPulling="2026-01-21 11:10:08.774218483 +0000 UTC m=+796.034174952" lastFinishedPulling="2026-01-21 11:10:40.916890223 +0000 UTC m=+828.176846712" observedRunningTime="2026-01-21 11:10:42.161247709 +0000 UTC m=+829.421204178" watchObservedRunningTime="2026-01-21 11:10:42.165326859 +0000 UTC m=+829.425283328" Jan 21 11:10:43 crc kubenswrapper[4881]: I0121 11:10:43.152225 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rp92p" event={"ID":"999c36a2-9f08-4da1-b14a-859ac888ae38","Type":"ContainerStarted","Data":"4ce79473caabd6c07995b3e5afa25c90af88575ae15a49fe39ef109530a02b1e"} Jan 21 11:10:43 crc kubenswrapper[4881]: I0121 11:10:43.172680 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rp92p" podStartSLOduration=3.83139522 podStartE2EDuration="37.172653075s" podCreationTimestamp="2026-01-21 11:10:06 +0000 UTC" firstStartedPulling="2026-01-21 11:10:08.642072454 +0000 UTC m=+795.902028923" lastFinishedPulling="2026-01-21 11:10:41.983330309 +0000 UTC m=+829.243286778" observedRunningTime="2026-01-21 11:10:43.167179402 +0000 UTC m=+830.427135901" watchObservedRunningTime="2026-01-21 11:10:43.172653075 +0000 UTC m=+830.432609564" Jan 21 11:10:47 crc kubenswrapper[4881]: I0121 11:10:47.637382 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-6srxm" Jan 21 11:10:59 crc kubenswrapper[4881]: I0121 11:10:59.850494 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:10:59 crc kubenswrapper[4881]: I0121 11:10:59.850975 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:10:59 crc kubenswrapper[4881]: I0121 11:10:59.851027 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 11:10:59 crc kubenswrapper[4881]: I0121 11:10:59.851730 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c61b3d568dcd0ae9a4c5e1f2de21cf5a0db2cf65652a9e217f03473254856b16"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:10:59 crc kubenswrapper[4881]: I0121 11:10:59.851808 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://c61b3d568dcd0ae9a4c5e1f2de21cf5a0db2cf65652a9e217f03473254856b16" gracePeriod=600 Jan 21 11:11:01 crc kubenswrapper[4881]: I0121 11:11:01.272180 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="c61b3d568dcd0ae9a4c5e1f2de21cf5a0db2cf65652a9e217f03473254856b16" exitCode=0 Jan 21 11:11:01 crc 
Jan 21 11:11:01 crc kubenswrapper[4881]: I0121 11:11:01.272215 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"c61b3d568dcd0ae9a4c5e1f2de21cf5a0db2cf65652a9e217f03473254856b16"}
Jan 21 11:11:01 crc kubenswrapper[4881]: I0121 11:11:01.272583 4881 scope.go:117] "RemoveContainer" containerID="51d484e782c204b0b6011f8d0be626571952d106a910dddde0a66e728028905b"
Jan 21 11:11:02 crc kubenswrapper[4881]: I0121 11:11:02.282020 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"abaaf16a1930b4e2e9a1e1d952f2948a8b09bfb0c0f18add47eef44fe07067c5"}
Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.400689 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq"]
Jan 21 11:11:08 crc kubenswrapper[4881]: E0121 11:11:08.401643 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" containerName="extract-utilities"
Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.401659 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" containerName="extract-utilities"
Jan 21 11:11:08 crc kubenswrapper[4881]: E0121 11:11:08.401672 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" containerName="extract-content"
Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.401677 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" containerName="extract-content"
Jan 21 11:11:08 crc kubenswrapper[4881]: E0121 11:11:08.401695 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" containerName="registry-server"
Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.401702 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" containerName="registry-server"
Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.401817 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" containerName="registry-server"
Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.402625 4881 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.404621 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.414963 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq"] Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.497163 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1bb22c78-c1fd-422e-900a-52c4b73fb451-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq\" (UID: \"1bb22c78-c1fd-422e-900a-52c4b73fb451\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.497254 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69mwj\" (UniqueName: \"kubernetes.io/projected/1bb22c78-c1fd-422e-900a-52c4b73fb451-kube-api-access-69mwj\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq\" (UID: \"1bb22c78-c1fd-422e-900a-52c4b73fb451\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.497394 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1bb22c78-c1fd-422e-900a-52c4b73fb451-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq\" (UID: \"1bb22c78-c1fd-422e-900a-52c4b73fb451\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.599452 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69mwj\" (UniqueName: \"kubernetes.io/projected/1bb22c78-c1fd-422e-900a-52c4b73fb451-kube-api-access-69mwj\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq\" (UID: \"1bb22c78-c1fd-422e-900a-52c4b73fb451\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.599535 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1bb22c78-c1fd-422e-900a-52c4b73fb451-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq\" (UID: \"1bb22c78-c1fd-422e-900a-52c4b73fb451\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.599640 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1bb22c78-c1fd-422e-900a-52c4b73fb451-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq\" (UID: \"1bb22c78-c1fd-422e-900a-52c4b73fb451\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.600292 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/1bb22c78-c1fd-422e-900a-52c4b73fb451-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq\" (UID: \"1bb22c78-c1fd-422e-900a-52c4b73fb451\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.600388 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1bb22c78-c1fd-422e-900a-52c4b73fb451-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq\" (UID: \"1bb22c78-c1fd-422e-900a-52c4b73fb451\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.620208 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69mwj\" (UniqueName: \"kubernetes.io/projected/1bb22c78-c1fd-422e-900a-52c4b73fb451-kube-api-access-69mwj\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq\" (UID: \"1bb22c78-c1fd-422e-900a-52c4b73fb451\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.719166 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" Jan 21 11:11:09 crc kubenswrapper[4881]: I0121 11:11:09.014893 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq"] Jan 21 11:11:09 crc kubenswrapper[4881]: I0121 11:11:09.328397 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" event={"ID":"1bb22c78-c1fd-422e-900a-52c4b73fb451","Type":"ContainerStarted","Data":"40bd3a7c64e9ea2a8dc049ad18ecc00565b1a2d412a0f6424dbd722f44e55c77"} Jan 21 11:11:10 crc kubenswrapper[4881]: I0121 11:11:10.337130 4881 generic.go:334] "Generic (PLEG): container finished" podID="1bb22c78-c1fd-422e-900a-52c4b73fb451" containerID="c9ed00009e2a833f1d6678a36314637e6447458f2b1a304bf57edb500bc4e94f" exitCode=0 Jan 21 11:11:10 crc kubenswrapper[4881]: I0121 11:11:10.337216 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" event={"ID":"1bb22c78-c1fd-422e-900a-52c4b73fb451","Type":"ContainerDied","Data":"c9ed00009e2a833f1d6678a36314637e6447458f2b1a304bf57edb500bc4e94f"} Jan 21 11:11:12 crc kubenswrapper[4881]: I0121 11:11:12.352125 4881 generic.go:334] "Generic (PLEG): container finished" podID="1bb22c78-c1fd-422e-900a-52c4b73fb451" containerID="abaae8f4635dfc8073c654713ae4fd8459a0ed4d66141b1f6aaf0e2395aa0f08" exitCode=0 Jan 21 11:11:12 crc kubenswrapper[4881]: I0121 11:11:12.352260 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" event={"ID":"1bb22c78-c1fd-422e-900a-52c4b73fb451","Type":"ContainerDied","Data":"abaae8f4635dfc8073c654713ae4fd8459a0ed4d66141b1f6aaf0e2395aa0f08"} Jan 21 11:11:13 crc kubenswrapper[4881]: I0121 11:11:13.363056 4881 generic.go:334] "Generic (PLEG): container finished" podID="1bb22c78-c1fd-422e-900a-52c4b73fb451" containerID="bf4e152e561f858eb56118ae54e7090f18e80d7b4252fb965ebd4fb6a084de56" exitCode=0 Jan 21 11:11:13 crc kubenswrapper[4881]: I0121 
11:11:13.363113 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" event={"ID":"1bb22c78-c1fd-422e-900a-52c4b73fb451","Type":"ContainerDied","Data":"bf4e152e561f858eb56118ae54e7090f18e80d7b4252fb965ebd4fb6a084de56"} Jan 21 11:11:14 crc kubenswrapper[4881]: I0121 11:11:14.746914 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" Jan 21 11:11:14 crc kubenswrapper[4881]: I0121 11:11:14.892392 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1bb22c78-c1fd-422e-900a-52c4b73fb451-bundle\") pod \"1bb22c78-c1fd-422e-900a-52c4b73fb451\" (UID: \"1bb22c78-c1fd-422e-900a-52c4b73fb451\") " Jan 21 11:11:14 crc kubenswrapper[4881]: I0121 11:11:14.892457 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1bb22c78-c1fd-422e-900a-52c4b73fb451-util\") pod \"1bb22c78-c1fd-422e-900a-52c4b73fb451\" (UID: \"1bb22c78-c1fd-422e-900a-52c4b73fb451\") " Jan 21 11:11:14 crc kubenswrapper[4881]: I0121 11:11:14.892561 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69mwj\" (UniqueName: \"kubernetes.io/projected/1bb22c78-c1fd-422e-900a-52c4b73fb451-kube-api-access-69mwj\") pod \"1bb22c78-c1fd-422e-900a-52c4b73fb451\" (UID: \"1bb22c78-c1fd-422e-900a-52c4b73fb451\") " Jan 21 11:11:14 crc kubenswrapper[4881]: I0121 11:11:14.893095 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bb22c78-c1fd-422e-900a-52c4b73fb451-bundle" (OuterVolumeSpecName: "bundle") pod "1bb22c78-c1fd-422e-900a-52c4b73fb451" (UID: "1bb22c78-c1fd-422e-900a-52c4b73fb451"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:11:14 crc kubenswrapper[4881]: I0121 11:11:14.897669 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bb22c78-c1fd-422e-900a-52c4b73fb451-kube-api-access-69mwj" (OuterVolumeSpecName: "kube-api-access-69mwj") pod "1bb22c78-c1fd-422e-900a-52c4b73fb451" (UID: "1bb22c78-c1fd-422e-900a-52c4b73fb451"). InnerVolumeSpecName "kube-api-access-69mwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:11:14 crc kubenswrapper[4881]: I0121 11:11:14.907412 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bb22c78-c1fd-422e-900a-52c4b73fb451-util" (OuterVolumeSpecName: "util") pod "1bb22c78-c1fd-422e-900a-52c4b73fb451" (UID: "1bb22c78-c1fd-422e-900a-52c4b73fb451"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:11:14 crc kubenswrapper[4881]: I0121 11:11:14.994400 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69mwj\" (UniqueName: \"kubernetes.io/projected/1bb22c78-c1fd-422e-900a-52c4b73fb451-kube-api-access-69mwj\") on node \"crc\" DevicePath \"\"" Jan 21 11:11:14 crc kubenswrapper[4881]: I0121 11:11:14.994447 4881 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1bb22c78-c1fd-422e-900a-52c4b73fb451-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:11:14 crc kubenswrapper[4881]: I0121 11:11:14.994461 4881 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1bb22c78-c1fd-422e-900a-52c4b73fb451-util\") on node \"crc\" DevicePath \"\"" Jan 21 11:11:15 crc kubenswrapper[4881]: I0121 11:11:15.378233 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" event={"ID":"1bb22c78-c1fd-422e-900a-52c4b73fb451","Type":"ContainerDied","Data":"40bd3a7c64e9ea2a8dc049ad18ecc00565b1a2d412a0f6424dbd722f44e55c77"} Jan 21 11:11:15 crc kubenswrapper[4881]: I0121 11:11:15.378287 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" Jan 21 11:11:15 crc kubenswrapper[4881]: I0121 11:11:15.378297 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40bd3a7c64e9ea2a8dc049ad18ecc00565b1a2d412a0f6424dbd722f44e55c77" Jan 21 11:11:16 crc kubenswrapper[4881]: I0121 11:11:16.948574 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-zlxs9"] Jan 21 11:11:16 crc kubenswrapper[4881]: E0121 11:11:16.950720 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bb22c78-c1fd-422e-900a-52c4b73fb451" containerName="util" Jan 21 11:11:16 crc kubenswrapper[4881]: I0121 11:11:16.950820 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bb22c78-c1fd-422e-900a-52c4b73fb451" containerName="util" Jan 21 11:11:16 crc kubenswrapper[4881]: E0121 11:11:16.950900 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bb22c78-c1fd-422e-900a-52c4b73fb451" containerName="pull" Jan 21 11:11:16 crc kubenswrapper[4881]: I0121 11:11:16.950954 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bb22c78-c1fd-422e-900a-52c4b73fb451" containerName="pull" Jan 21 11:11:16 crc kubenswrapper[4881]: E0121 11:11:16.951067 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bb22c78-c1fd-422e-900a-52c4b73fb451" containerName="extract" Jan 21 11:11:16 crc kubenswrapper[4881]: I0121 11:11:16.951118 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bb22c78-c1fd-422e-900a-52c4b73fb451" containerName="extract" Jan 21 11:11:16 crc kubenswrapper[4881]: I0121 11:11:16.951309 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bb22c78-c1fd-422e-900a-52c4b73fb451" containerName="extract" Jan 21 11:11:16 crc kubenswrapper[4881]: I0121 11:11:16.951982 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-zlxs9" Jan 21 11:11:16 crc kubenswrapper[4881]: I0121 11:11:16.958112 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 21 11:11:16 crc kubenswrapper[4881]: I0121 11:11:16.958271 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 21 11:11:16 crc kubenswrapper[4881]: I0121 11:11:16.958271 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-tpcqs" Jan 21 11:11:16 crc kubenswrapper[4881]: I0121 11:11:16.967187 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-zlxs9"] Jan 21 11:11:17 crc kubenswrapper[4881]: I0121 11:11:17.055006 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chnwc\" (UniqueName: \"kubernetes.io/projected/14878b0e-37cc-4c03-89df-ba23d94589a0-kube-api-access-chnwc\") pod \"nmstate-operator-646758c888-zlxs9\" (UID: \"14878b0e-37cc-4c03-89df-ba23d94589a0\") " pod="openshift-nmstate/nmstate-operator-646758c888-zlxs9" Jan 21 11:11:17 crc kubenswrapper[4881]: I0121 11:11:17.156759 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chnwc\" (UniqueName: \"kubernetes.io/projected/14878b0e-37cc-4c03-89df-ba23d94589a0-kube-api-access-chnwc\") pod \"nmstate-operator-646758c888-zlxs9\" (UID: \"14878b0e-37cc-4c03-89df-ba23d94589a0\") " pod="openshift-nmstate/nmstate-operator-646758c888-zlxs9" Jan 21 11:11:17 crc kubenswrapper[4881]: I0121 11:11:17.198918 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chnwc\" (UniqueName: \"kubernetes.io/projected/14878b0e-37cc-4c03-89df-ba23d94589a0-kube-api-access-chnwc\") pod \"nmstate-operator-646758c888-zlxs9\" (UID: \"14878b0e-37cc-4c03-89df-ba23d94589a0\") " pod="openshift-nmstate/nmstate-operator-646758c888-zlxs9" Jan 21 11:11:17 crc kubenswrapper[4881]: I0121 11:11:17.271565 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-zlxs9" Jan 21 11:11:17 crc kubenswrapper[4881]: I0121 11:11:17.553363 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-zlxs9"] Jan 21 11:11:18 crc kubenswrapper[4881]: I0121 11:11:18.408437 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-zlxs9" event={"ID":"14878b0e-37cc-4c03-89df-ba23d94589a0","Type":"ContainerStarted","Data":"7ed46c79bb08a2c1612067064decb37ed8b04c6a79956da7192766e827f18ea7"} Jan 21 11:11:20 crc kubenswrapper[4881]: I0121 11:11:20.425860 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-zlxs9" event={"ID":"14878b0e-37cc-4c03-89df-ba23d94589a0","Type":"ContainerStarted","Data":"f6090be0fcc0b7c7a66c51f9657cad982b8158dbaa93ebaf2206d9ce9fc7fccf"} Jan 21 11:11:20 crc kubenswrapper[4881]: I0121 11:11:20.446748 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-zlxs9" podStartSLOduration=1.968076664 podStartE2EDuration="4.44673052s" podCreationTimestamp="2026-01-21 11:11:16 +0000 UTC" firstStartedPulling="2026-01-21 11:11:17.564111515 +0000 UTC m=+864.824067984" lastFinishedPulling="2026-01-21 11:11:20.042765371 +0000 UTC m=+867.302721840" observedRunningTime="2026-01-21 11:11:20.441229065 +0000 UTC m=+867.701185534" watchObservedRunningTime="2026-01-21 11:11:20.44673052 +0000 UTC m=+867.706686989" Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.821171 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-ft48b"] Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.822174 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-ft48b" Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.824115 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-2flt2" Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.838946 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-ft48b"] Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.848282 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k"] Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.849378 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k" Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.850983 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.861873 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k"] Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.874478 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-b9rcw"] Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.876929 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.966512 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc"] Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.967398 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc" Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.969085 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.971034 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-zgb88" Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.971260 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.985901 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc"] Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.003578 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b6262b8c-2531-4008-9bb8-c3beeb66a3ed-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-qmv5k\" (UID: \"b6262b8c-2531-4008-9bb8-c3beeb66a3ed\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.003658 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5c705c83-efa0-436f-a0b5-9164dbb6b1df-ovs-socket\") pod \"nmstate-handler-b9rcw\" (UID: \"5c705c83-efa0-436f-a0b5-9164dbb6b1df\") " pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.003708 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7mrm\" (UniqueName: \"kubernetes.io/projected/5c705c83-efa0-436f-a0b5-9164dbb6b1df-kube-api-access-h7mrm\") pod \"nmstate-handler-b9rcw\" (UID: \"5c705c83-efa0-436f-a0b5-9164dbb6b1df\") " pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.003730 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/5c705c83-efa0-436f-a0b5-9164dbb6b1df-nmstate-lock\") pod \"nmstate-handler-b9rcw\" (UID: \"5c705c83-efa0-436f-a0b5-9164dbb6b1df\") " pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.003756 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54gh7\" (UniqueName: \"kubernetes.io/projected/b6262b8c-2531-4008-9bb8-c3beeb66a3ed-kube-api-access-54gh7\") pod \"nmstate-webhook-8474b5b9d8-qmv5k\" (UID: \"b6262b8c-2531-4008-9bb8-c3beeb66a3ed\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.003779 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/5c705c83-efa0-436f-a0b5-9164dbb6b1df-dbus-socket\") pod \"nmstate-handler-b9rcw\" (UID: 
\"5c705c83-efa0-436f-a0b5-9164dbb6b1df\") " pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.003822 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rdkr\" (UniqueName: \"kubernetes.io/projected/f68408aa-3450-42af-a6f8-b5260973f272-kube-api-access-7rdkr\") pod \"nmstate-metrics-54757c584b-ft48b\" (UID: \"f68408aa-3450-42af-a6f8-b5260973f272\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-ft48b" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.104611 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5c705c83-efa0-436f-a0b5-9164dbb6b1df-ovs-socket\") pod \"nmstate-handler-b9rcw\" (UID: \"5c705c83-efa0-436f-a0b5-9164dbb6b1df\") " pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.104674 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvtms\" (UniqueName: \"kubernetes.io/projected/fcdadd73-568f-4ae0-a7bb-9330b2feb835-kube-api-access-hvtms\") pod \"nmstate-console-plugin-7754f76f8b-lgdjc\" (UID: \"fcdadd73-568f-4ae0-a7bb-9330b2feb835\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.104710 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fcdadd73-568f-4ae0-a7bb-9330b2feb835-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-lgdjc\" (UID: \"fcdadd73-568f-4ae0-a7bb-9330b2feb835\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.104755 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/fcdadd73-568f-4ae0-a7bb-9330b2feb835-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-lgdjc\" (UID: \"fcdadd73-568f-4ae0-a7bb-9330b2feb835\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.104764 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5c705c83-efa0-436f-a0b5-9164dbb6b1df-ovs-socket\") pod \"nmstate-handler-b9rcw\" (UID: \"5c705c83-efa0-436f-a0b5-9164dbb6b1df\") " pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.104980 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7mrm\" (UniqueName: \"kubernetes.io/projected/5c705c83-efa0-436f-a0b5-9164dbb6b1df-kube-api-access-h7mrm\") pod \"nmstate-handler-b9rcw\" (UID: \"5c705c83-efa0-436f-a0b5-9164dbb6b1df\") " pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.105049 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/5c705c83-efa0-436f-a0b5-9164dbb6b1df-nmstate-lock\") pod \"nmstate-handler-b9rcw\" (UID: \"5c705c83-efa0-436f-a0b5-9164dbb6b1df\") " pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.105139 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/5c705c83-efa0-436f-a0b5-9164dbb6b1df-nmstate-lock\") pod \"nmstate-handler-b9rcw\" (UID: \"5c705c83-efa0-436f-a0b5-9164dbb6b1df\") " pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.105488 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54gh7\" (UniqueName: \"kubernetes.io/projected/b6262b8c-2531-4008-9bb8-c3beeb66a3ed-kube-api-access-54gh7\") pod \"nmstate-webhook-8474b5b9d8-qmv5k\" (UID: \"b6262b8c-2531-4008-9bb8-c3beeb66a3ed\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.105538 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/5c705c83-efa0-436f-a0b5-9164dbb6b1df-dbus-socket\") pod \"nmstate-handler-b9rcw\" (UID: \"5c705c83-efa0-436f-a0b5-9164dbb6b1df\") " pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.105590 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rdkr\" (UniqueName: \"kubernetes.io/projected/f68408aa-3450-42af-a6f8-b5260973f272-kube-api-access-7rdkr\") pod \"nmstate-metrics-54757c584b-ft48b\" (UID: \"f68408aa-3450-42af-a6f8-b5260973f272\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-ft48b" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.105666 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b6262b8c-2531-4008-9bb8-c3beeb66a3ed-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-qmv5k\" (UID: \"b6262b8c-2531-4008-9bb8-c3beeb66a3ed\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.105912 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/5c705c83-efa0-436f-a0b5-9164dbb6b1df-dbus-socket\") pod \"nmstate-handler-b9rcw\" (UID: \"5c705c83-efa0-436f-a0b5-9164dbb6b1df\") " pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.119162 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b6262b8c-2531-4008-9bb8-c3beeb66a3ed-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-qmv5k\" (UID: \"b6262b8c-2531-4008-9bb8-c3beeb66a3ed\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.122732 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7mrm\" (UniqueName: \"kubernetes.io/projected/5c705c83-efa0-436f-a0b5-9164dbb6b1df-kube-api-access-h7mrm\") pod \"nmstate-handler-b9rcw\" (UID: \"5c705c83-efa0-436f-a0b5-9164dbb6b1df\") " pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.122938 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rdkr\" (UniqueName: \"kubernetes.io/projected/f68408aa-3450-42af-a6f8-b5260973f272-kube-api-access-7rdkr\") pod \"nmstate-metrics-54757c584b-ft48b\" (UID: \"f68408aa-3450-42af-a6f8-b5260973f272\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-ft48b" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.123231 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54gh7\" (UniqueName: 
\"kubernetes.io/projected/b6262b8c-2531-4008-9bb8-c3beeb66a3ed-kube-api-access-54gh7\") pod \"nmstate-webhook-8474b5b9d8-qmv5k\" (UID: \"b6262b8c-2531-4008-9bb8-c3beeb66a3ed\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.140773 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-ft48b" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.164967 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.180310 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5948d4cb5-h9dr6"] Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.188999 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.201851 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.206516 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvtms\" (UniqueName: \"kubernetes.io/projected/fcdadd73-568f-4ae0-a7bb-9330b2feb835-kube-api-access-hvtms\") pod \"nmstate-console-plugin-7754f76f8b-lgdjc\" (UID: \"fcdadd73-568f-4ae0-a7bb-9330b2feb835\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.206569 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fcdadd73-568f-4ae0-a7bb-9330b2feb835-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-lgdjc\" (UID: \"fcdadd73-568f-4ae0-a7bb-9330b2feb835\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.206592 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/fcdadd73-568f-4ae0-a7bb-9330b2feb835-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-lgdjc\" (UID: \"fcdadd73-568f-4ae0-a7bb-9330b2feb835\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.207671 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5948d4cb5-h9dr6"] Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.208913 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fcdadd73-568f-4ae0-a7bb-9330b2feb835-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-lgdjc\" (UID: \"fcdadd73-568f-4ae0-a7bb-9330b2feb835\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.210220 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/fcdadd73-568f-4ae0-a7bb-9330b2feb835-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-lgdjc\" (UID: \"fcdadd73-568f-4ae0-a7bb-9330b2feb835\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.234508 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-hvtms\" (UniqueName: \"kubernetes.io/projected/fcdadd73-568f-4ae0-a7bb-9330b2feb835-kube-api-access-hvtms\") pod \"nmstate-console-plugin-7754f76f8b-lgdjc\" (UID: \"fcdadd73-568f-4ae0-a7bb-9330b2feb835\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.289201 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc" Jan 21 11:11:22 crc kubenswrapper[4881]: W0121 11:11:22.300678 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c705c83_efa0_436f_a0b5_9164dbb6b1df.slice/crio-91f221e0efb5cb7df51ed985fb369e978bfbe0e46f415631ebbcb58009bf1cea WatchSource:0}: Error finding container 91f221e0efb5cb7df51ed985fb369e978bfbe0e46f415631ebbcb58009bf1cea: Status 404 returned error can't find the container with id 91f221e0efb5cb7df51ed985fb369e978bfbe0e46f415631ebbcb58009bf1cea Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.308149 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e9935505-550d-4eed-9bda-72ec999ff529-console-oauth-config\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.308257 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9935505-550d-4eed-9bda-72ec999ff529-console-serving-cert\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.308299 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2j6p\" (UniqueName: \"kubernetes.io/projected/e9935505-550d-4eed-9bda-72ec999ff529-kube-api-access-x2j6p\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.308336 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e9935505-550d-4eed-9bda-72ec999ff529-console-config\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.308373 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e9935505-550d-4eed-9bda-72ec999ff529-oauth-serving-cert\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.308405 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e9935505-550d-4eed-9bda-72ec999ff529-service-ca\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 
11:11:22.308433 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9935505-550d-4eed-9bda-72ec999ff529-trusted-ca-bundle\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.411677 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e9935505-550d-4eed-9bda-72ec999ff529-oauth-serving-cert\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.412022 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e9935505-550d-4eed-9bda-72ec999ff529-service-ca\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.412051 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9935505-550d-4eed-9bda-72ec999ff529-trusted-ca-bundle\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.412073 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e9935505-550d-4eed-9bda-72ec999ff529-console-oauth-config\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.412095 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9935505-550d-4eed-9bda-72ec999ff529-console-serving-cert\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.412126 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2j6p\" (UniqueName: \"kubernetes.io/projected/e9935505-550d-4eed-9bda-72ec999ff529-kube-api-access-x2j6p\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.412158 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e9935505-550d-4eed-9bda-72ec999ff529-console-config\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.412886 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e9935505-550d-4eed-9bda-72ec999ff529-oauth-serving-cert\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 
11:11:22.414564 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e9935505-550d-4eed-9bda-72ec999ff529-service-ca\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.416237 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e9935505-550d-4eed-9bda-72ec999ff529-console-config\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.421004 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9935505-550d-4eed-9bda-72ec999ff529-trusted-ca-bundle\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.427625 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e9935505-550d-4eed-9bda-72ec999ff529-console-oauth-config\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.431257 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9935505-550d-4eed-9bda-72ec999ff529-console-serving-cert\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.443852 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-b9rcw" event={"ID":"5c705c83-efa0-436f-a0b5-9164dbb6b1df","Type":"ContainerStarted","Data":"91f221e0efb5cb7df51ed985fb369e978bfbe0e46f415631ebbcb58009bf1cea"} Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.473012 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2j6p\" (UniqueName: \"kubernetes.io/projected/e9935505-550d-4eed-9bda-72ec999ff529-kube-api-access-x2j6p\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.580399 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.603279 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-ft48b"] Jan 21 11:11:22 crc kubenswrapper[4881]: W0121 11:11:22.604010 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf68408aa_3450_42af_a6f8_b5260973f272.slice/crio-75263f711a01743da2eda02df172618a1f70bf3f71d4552a680f3f08dba4b6d1 WatchSource:0}: Error finding container 75263f711a01743da2eda02df172618a1f70bf3f71d4552a680f3f08dba4b6d1: Status 404 returned error can't find the container with id 75263f711a01743da2eda02df172618a1f70bf3f71d4552a680f3f08dba4b6d1 Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.712137 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc"] Jan 21 11:11:22 crc kubenswrapper[4881]: W0121 11:11:22.720203 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcdadd73_568f_4ae0_a7bb_9330b2feb835.slice/crio-2f1c3a1b1622749132028b49619365e095b0384d7bf38678f2f951e18082dadb WatchSource:0}: Error finding container 2f1c3a1b1622749132028b49619365e095b0384d7bf38678f2f951e18082dadb: Status 404 returned error can't find the container with id 2f1c3a1b1622749132028b49619365e095b0384d7bf38678f2f951e18082dadb Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.826348 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5948d4cb5-h9dr6"] Jan 21 11:11:22 crc kubenswrapper[4881]: W0121 11:11:22.831660 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9935505_550d_4eed_9bda_72ec999ff529.slice/crio-497d3c7ee19f1d5d0fadd00346965ee64957bc419f14d1e5b93a9b9599deadf7 WatchSource:0}: Error finding container 497d3c7ee19f1d5d0fadd00346965ee64957bc419f14d1e5b93a9b9599deadf7: Status 404 returned error can't find the container with id 497d3c7ee19f1d5d0fadd00346965ee64957bc419f14d1e5b93a9b9599deadf7 Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.864950 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k"] Jan 21 11:11:22 crc kubenswrapper[4881]: W0121 11:11:22.872418 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6262b8c_2531_4008_9bb8_c3beeb66a3ed.slice/crio-3eadffafaaf64fa656cd418fb82245ba3b843b292288022e7307308625165420 WatchSource:0}: Error finding container 3eadffafaaf64fa656cd418fb82245ba3b843b292288022e7307308625165420: Status 404 returned error can't find the container with id 3eadffafaaf64fa656cd418fb82245ba3b843b292288022e7307308625165420 Jan 21 11:11:23 crc kubenswrapper[4881]: I0121 11:11:23.454463 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc" event={"ID":"fcdadd73-568f-4ae0-a7bb-9330b2feb835","Type":"ContainerStarted","Data":"2f1c3a1b1622749132028b49619365e095b0384d7bf38678f2f951e18082dadb"} Jan 21 11:11:23 crc kubenswrapper[4881]: I0121 11:11:23.456261 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k" 
event={"ID":"b6262b8c-2531-4008-9bb8-c3beeb66a3ed","Type":"ContainerStarted","Data":"3eadffafaaf64fa656cd418fb82245ba3b843b292288022e7307308625165420"} Jan 21 11:11:23 crc kubenswrapper[4881]: I0121 11:11:23.458297 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5948d4cb5-h9dr6" event={"ID":"e9935505-550d-4eed-9bda-72ec999ff529","Type":"ContainerStarted","Data":"206fe5f53965f9042b6d06482e1063a81c74186cfba2d8c918d9f50cbcc3a46a"} Jan 21 11:11:23 crc kubenswrapper[4881]: I0121 11:11:23.458428 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5948d4cb5-h9dr6" event={"ID":"e9935505-550d-4eed-9bda-72ec999ff529","Type":"ContainerStarted","Data":"497d3c7ee19f1d5d0fadd00346965ee64957bc419f14d1e5b93a9b9599deadf7"} Jan 21 11:11:23 crc kubenswrapper[4881]: I0121 11:11:23.460564 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-ft48b" event={"ID":"f68408aa-3450-42af-a6f8-b5260973f272","Type":"ContainerStarted","Data":"75263f711a01743da2eda02df172618a1f70bf3f71d4552a680f3f08dba4b6d1"} Jan 21 11:11:23 crc kubenswrapper[4881]: I0121 11:11:23.476414 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5948d4cb5-h9dr6" podStartSLOduration=1.47638972 podStartE2EDuration="1.47638972s" podCreationTimestamp="2026-01-21 11:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:11:23.474993145 +0000 UTC m=+870.734949624" watchObservedRunningTime="2026-01-21 11:11:23.47638972 +0000 UTC m=+870.736346199" Jan 21 11:11:27 crc kubenswrapper[4881]: I0121 11:11:27.552230 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc" event={"ID":"fcdadd73-568f-4ae0-a7bb-9330b2feb835","Type":"ContainerStarted","Data":"262c061c6c6cee551071d338125204388b4e9ec2038d211196eb84e0c1b73988"} Jan 21 11:11:27 crc kubenswrapper[4881]: I0121 11:11:27.554092 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-b9rcw" event={"ID":"5c705c83-efa0-436f-a0b5-9164dbb6b1df","Type":"ContainerStarted","Data":"ca467ceadddb4897cca8c993245e98b429120425d599d31934c93bd2c9009863"} Jan 21 11:11:27 crc kubenswrapper[4881]: I0121 11:11:27.554148 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:27 crc kubenswrapper[4881]: I0121 11:11:27.555574 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k" event={"ID":"b6262b8c-2531-4008-9bb8-c3beeb66a3ed","Type":"ContainerStarted","Data":"d414a51aeff912ae63db4bdd3d121d4297af4d1fb98e61e5d54ceef0eb082f61"} Jan 21 11:11:27 crc kubenswrapper[4881]: I0121 11:11:27.556122 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k" Jan 21 11:11:27 crc kubenswrapper[4881]: I0121 11:11:27.557959 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-ft48b" event={"ID":"f68408aa-3450-42af-a6f8-b5260973f272","Type":"ContainerStarted","Data":"2695bc9cb695c6c9736deb95547df532c72d2cbde492fc714f3bcb49af8077c8"} Jan 21 11:11:27 crc kubenswrapper[4881]: I0121 11:11:27.572808 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc" 
podStartSLOduration=2.134906912 podStartE2EDuration="6.572772851s" podCreationTimestamp="2026-01-21 11:11:21 +0000 UTC" firstStartedPulling="2026-01-21 11:11:22.722684529 +0000 UTC m=+869.982640988" lastFinishedPulling="2026-01-21 11:11:27.160550458 +0000 UTC m=+874.420506927" observedRunningTime="2026-01-21 11:11:27.567028581 +0000 UTC m=+874.826985050" watchObservedRunningTime="2026-01-21 11:11:27.572772851 +0000 UTC m=+874.832729320" Jan 21 11:11:27 crc kubenswrapper[4881]: I0121 11:11:27.614778 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-b9rcw" podStartSLOduration=1.7590103400000001 podStartE2EDuration="6.614760081s" podCreationTimestamp="2026-01-21 11:11:21 +0000 UTC" firstStartedPulling="2026-01-21 11:11:22.306898609 +0000 UTC m=+869.566855078" lastFinishedPulling="2026-01-21 11:11:27.16264835 +0000 UTC m=+874.422604819" observedRunningTime="2026-01-21 11:11:27.588928187 +0000 UTC m=+874.848884656" watchObservedRunningTime="2026-01-21 11:11:27.614760081 +0000 UTC m=+874.874716550" Jan 21 11:11:27 crc kubenswrapper[4881]: I0121 11:11:27.618708 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k" podStartSLOduration=2.307992592 podStartE2EDuration="6.618694736s" podCreationTimestamp="2026-01-21 11:11:21 +0000 UTC" firstStartedPulling="2026-01-21 11:11:22.877307078 +0000 UTC m=+870.137263547" lastFinishedPulling="2026-01-21 11:11:27.188009222 +0000 UTC m=+874.447965691" observedRunningTime="2026-01-21 11:11:27.613053068 +0000 UTC m=+874.873009547" watchObservedRunningTime="2026-01-21 11:11:27.618694736 +0000 UTC m=+874.878651205" Jan 21 11:11:31 crc kubenswrapper[4881]: I0121 11:11:31.589379 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-ft48b" event={"ID":"f68408aa-3450-42af-a6f8-b5260973f272","Type":"ContainerStarted","Data":"9fb0de26b7e3a70f0d133614bd136b283ec44db245b6c99779b899a0d4dae022"} Jan 21 11:11:31 crc kubenswrapper[4881]: I0121 11:11:31.611818 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-ft48b" podStartSLOduration=2.813951692 podStartE2EDuration="10.611794906s" podCreationTimestamp="2026-01-21 11:11:21 +0000 UTC" firstStartedPulling="2026-01-21 11:11:22.6068887 +0000 UTC m=+869.866845169" lastFinishedPulling="2026-01-21 11:11:30.404731914 +0000 UTC m=+877.664688383" observedRunningTime="2026-01-21 11:11:31.605850621 +0000 UTC m=+878.865807120" watchObservedRunningTime="2026-01-21 11:11:31.611794906 +0000 UTC m=+878.871751375" Jan 21 11:11:32 crc kubenswrapper[4881]: I0121 11:11:32.226160 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:32 crc kubenswrapper[4881]: I0121 11:11:32.581914 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:32 crc kubenswrapper[4881]: I0121 11:11:32.582063 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:32 crc kubenswrapper[4881]: I0121 11:11:32.586876 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:32 crc kubenswrapper[4881]: I0121 11:11:32.599729 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:32 crc kubenswrapper[4881]: I0121 11:11:32.662460 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-qxzd9"] Jan 21 11:11:42 crc kubenswrapper[4881]: I0121 11:11:42.171059 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k" Jan 21 11:11:57 crc kubenswrapper[4881]: I0121 11:11:57.709342 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-qxzd9" podUID="bb8fc8b3-9818-40e2-afb2-860e2d1efae1" containerName="console" containerID="cri-o://8f2ac82a3ce8ce5983172b3cbd1e9a6aa27d2f48fa81d54ee2ef2ad283fa8d47" gracePeriod=15 Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.159712 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-qxzd9_bb8fc8b3-9818-40e2-afb2-860e2d1efae1/console/0.log" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.160226 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.263177 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-service-ca\") pod \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.263657 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-config\") pod \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.264003 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blg69\" (UniqueName: \"kubernetes.io/projected/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-kube-api-access-blg69\") pod \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.264043 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-oauth-config\") pod \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.264240 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-config" (OuterVolumeSpecName: "console-config") pod "bb8fc8b3-9818-40e2-afb2-860e2d1efae1" (UID: "bb8fc8b3-9818-40e2-afb2-860e2d1efae1"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.264263 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-service-ca" (OuterVolumeSpecName: "service-ca") pod "bb8fc8b3-9818-40e2-afb2-860e2d1efae1" (UID: "bb8fc8b3-9818-40e2-afb2-860e2d1efae1"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.264604 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-oauth-serving-cert\") pod \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.264849 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "bb8fc8b3-9818-40e2-afb2-860e2d1efae1" (UID: "bb8fc8b3-9818-40e2-afb2-860e2d1efae1"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.265061 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-trusted-ca-bundle\") pod \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.265274 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-serving-cert\") pod \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.265520 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "bb8fc8b3-9818-40e2-afb2-860e2d1efae1" (UID: "bb8fc8b3-9818-40e2-afb2-860e2d1efae1"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.266051 4881 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.266076 4881 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.266088 4881 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.266098 4881 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.271856 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-kube-api-access-blg69" (OuterVolumeSpecName: "kube-api-access-blg69") pod "bb8fc8b3-9818-40e2-afb2-860e2d1efae1" (UID: "bb8fc8b3-9818-40e2-afb2-860e2d1efae1"). InnerVolumeSpecName "kube-api-access-blg69". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.275369 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "bb8fc8b3-9818-40e2-afb2-860e2d1efae1" (UID: "bb8fc8b3-9818-40e2-afb2-860e2d1efae1"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.278269 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "bb8fc8b3-9818-40e2-afb2-860e2d1efae1" (UID: "bb8fc8b3-9818-40e2-afb2-860e2d1efae1"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.329335 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-qxzd9_bb8fc8b3-9818-40e2-afb2-860e2d1efae1/console/0.log" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.330083 4881 generic.go:334] "Generic (PLEG): container finished" podID="bb8fc8b3-9818-40e2-afb2-860e2d1efae1" containerID="8f2ac82a3ce8ce5983172b3cbd1e9a6aa27d2f48fa81d54ee2ef2ad283fa8d47" exitCode=2 Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.330131 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qxzd9" event={"ID":"bb8fc8b3-9818-40e2-afb2-860e2d1efae1","Type":"ContainerDied","Data":"8f2ac82a3ce8ce5983172b3cbd1e9a6aa27d2f48fa81d54ee2ef2ad283fa8d47"} Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.330168 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qxzd9" event={"ID":"bb8fc8b3-9818-40e2-afb2-860e2d1efae1","Type":"ContainerDied","Data":"d060bd9f87ed03936c0be9ee17418f9087722140490e6ad49375f3c789b2e023"} Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.330192 4881 scope.go:117] "RemoveContainer" containerID="8f2ac82a3ce8ce5983172b3cbd1e9a6aa27d2f48fa81d54ee2ef2ad283fa8d47" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.330360 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.365485 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-qxzd9"] Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.368010 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blg69\" (UniqueName: \"kubernetes.io/projected/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-kube-api-access-blg69\") on node \"crc\" DevicePath \"\"" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.368045 4881 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.368058 4881 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.369645 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-qxzd9"] Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.370827 4881 scope.go:117] "RemoveContainer" containerID="8f2ac82a3ce8ce5983172b3cbd1e9a6aa27d2f48fa81d54ee2ef2ad283fa8d47" Jan 21 11:11:58 crc kubenswrapper[4881]: E0121 11:11:58.371407 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f2ac82a3ce8ce5983172b3cbd1e9a6aa27d2f48fa81d54ee2ef2ad283fa8d47\": container with ID starting with 8f2ac82a3ce8ce5983172b3cbd1e9a6aa27d2f48fa81d54ee2ef2ad283fa8d47 not found: ID does not exist" containerID="8f2ac82a3ce8ce5983172b3cbd1e9a6aa27d2f48fa81d54ee2ef2ad283fa8d47" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.371467 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f2ac82a3ce8ce5983172b3cbd1e9a6aa27d2f48fa81d54ee2ef2ad283fa8d47"} err="failed to get container status \"8f2ac82a3ce8ce5983172b3cbd1e9a6aa27d2f48fa81d54ee2ef2ad283fa8d47\": rpc error: code = NotFound desc = could not find container \"8f2ac82a3ce8ce5983172b3cbd1e9a6aa27d2f48fa81d54ee2ef2ad283fa8d47\": container with ID starting with 8f2ac82a3ce8ce5983172b3cbd1e9a6aa27d2f48fa81d54ee2ef2ad283fa8d47 not found: ID does not exist" Jan 21 11:11:59 crc kubenswrapper[4881]: I0121 11:11:59.329361 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb8fc8b3-9818-40e2-afb2-860e2d1efae1" path="/var/lib/kubelet/pods/bb8fc8b3-9818-40e2-afb2-860e2d1efae1/volumes" Jan 21 11:12:00 crc kubenswrapper[4881]: I0121 11:12:00.525552 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hd2w7"] Jan 21 11:12:00 crc kubenswrapper[4881]: E0121 11:12:00.526869 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb8fc8b3-9818-40e2-afb2-860e2d1efae1" containerName="console" Jan 21 11:12:00 crc kubenswrapper[4881]: I0121 11:12:00.526893 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb8fc8b3-9818-40e2-afb2-860e2d1efae1" containerName="console" Jan 21 11:12:00 crc kubenswrapper[4881]: I0121 11:12:00.527100 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb8fc8b3-9818-40e2-afb2-860e2d1efae1" containerName="console" Jan 21 11:12:00 crc kubenswrapper[4881]: I0121 11:12:00.529978 4881 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hd2w7" Jan 21 11:12:00 crc kubenswrapper[4881]: I0121 11:12:00.540765 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hd2w7"] Jan 21 11:12:00 crc kubenswrapper[4881]: I0121 11:12:00.607943 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcd7r\" (UniqueName: \"kubernetes.io/projected/9873ada5-628e-4b25-b739-4478cbe17296-kube-api-access-xcd7r\") pod \"certified-operators-hd2w7\" (UID: \"9873ada5-628e-4b25-b739-4478cbe17296\") " pod="openshift-marketplace/certified-operators-hd2w7" Jan 21 11:12:00 crc kubenswrapper[4881]: I0121 11:12:00.608007 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9873ada5-628e-4b25-b739-4478cbe17296-catalog-content\") pod \"certified-operators-hd2w7\" (UID: \"9873ada5-628e-4b25-b739-4478cbe17296\") " pod="openshift-marketplace/certified-operators-hd2w7" Jan 21 11:12:00 crc kubenswrapper[4881]: I0121 11:12:00.608046 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9873ada5-628e-4b25-b739-4478cbe17296-utilities\") pod \"certified-operators-hd2w7\" (UID: \"9873ada5-628e-4b25-b739-4478cbe17296\") " pod="openshift-marketplace/certified-operators-hd2w7" Jan 21 11:12:00 crc kubenswrapper[4881]: I0121 11:12:00.709400 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcd7r\" (UniqueName: \"kubernetes.io/projected/9873ada5-628e-4b25-b739-4478cbe17296-kube-api-access-xcd7r\") pod \"certified-operators-hd2w7\" (UID: \"9873ada5-628e-4b25-b739-4478cbe17296\") " pod="openshift-marketplace/certified-operators-hd2w7" Jan 21 11:12:00 crc kubenswrapper[4881]: I0121 11:12:00.709975 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9873ada5-628e-4b25-b739-4478cbe17296-catalog-content\") pod \"certified-operators-hd2w7\" (UID: \"9873ada5-628e-4b25-b739-4478cbe17296\") " pod="openshift-marketplace/certified-operators-hd2w7" Jan 21 11:12:00 crc kubenswrapper[4881]: I0121 11:12:00.710008 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9873ada5-628e-4b25-b739-4478cbe17296-utilities\") pod \"certified-operators-hd2w7\" (UID: \"9873ada5-628e-4b25-b739-4478cbe17296\") " pod="openshift-marketplace/certified-operators-hd2w7" Jan 21 11:12:00 crc kubenswrapper[4881]: I0121 11:12:00.710547 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9873ada5-628e-4b25-b739-4478cbe17296-catalog-content\") pod \"certified-operators-hd2w7\" (UID: \"9873ada5-628e-4b25-b739-4478cbe17296\") " pod="openshift-marketplace/certified-operators-hd2w7" Jan 21 11:12:00 crc kubenswrapper[4881]: I0121 11:12:00.710829 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9873ada5-628e-4b25-b739-4478cbe17296-utilities\") pod \"certified-operators-hd2w7\" (UID: \"9873ada5-628e-4b25-b739-4478cbe17296\") " pod="openshift-marketplace/certified-operators-hd2w7" Jan 21 11:12:00 crc kubenswrapper[4881]: I0121 
11:12:00.734843 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcd7r\" (UniqueName: \"kubernetes.io/projected/9873ada5-628e-4b25-b739-4478cbe17296-kube-api-access-xcd7r\") pod \"certified-operators-hd2w7\" (UID: \"9873ada5-628e-4b25-b739-4478cbe17296\") " pod="openshift-marketplace/certified-operators-hd2w7" Jan 21 11:12:00 crc kubenswrapper[4881]: I0121 11:12:00.851928 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hd2w7" Jan 21 11:12:01 crc kubenswrapper[4881]: I0121 11:12:01.636672 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hd2w7"] Jan 21 11:12:02 crc kubenswrapper[4881]: I0121 11:12:02.364030 4881 generic.go:334] "Generic (PLEG): container finished" podID="9873ada5-628e-4b25-b739-4478cbe17296" containerID="62a4996a49fa7e70025c2e6c3982db1575edae9d0df4fbdfdba74d92ed4e5ed6" exitCode=0 Jan 21 11:12:02 crc kubenswrapper[4881]: I0121 11:12:02.364175 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hd2w7" event={"ID":"9873ada5-628e-4b25-b739-4478cbe17296","Type":"ContainerDied","Data":"62a4996a49fa7e70025c2e6c3982db1575edae9d0df4fbdfdba74d92ed4e5ed6"} Jan 21 11:12:02 crc kubenswrapper[4881]: I0121 11:12:02.364419 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hd2w7" event={"ID":"9873ada5-628e-4b25-b739-4478cbe17296","Type":"ContainerStarted","Data":"03078ae198c32837e1314238531f4e1b4ba354a1768bb3f9c6c3700512d7bdc0"} Jan 21 11:12:03 crc kubenswrapper[4881]: I0121 11:12:03.373491 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hd2w7" event={"ID":"9873ada5-628e-4b25-b739-4478cbe17296","Type":"ContainerStarted","Data":"4449fd6347c9d97dfe10f3a25b7f401eba4d4ff908fbcb0731b3a3e709b1d7fd"} Jan 21 11:12:05 crc kubenswrapper[4881]: I0121 11:12:05.411414 4881 generic.go:334] "Generic (PLEG): container finished" podID="9873ada5-628e-4b25-b739-4478cbe17296" containerID="4449fd6347c9d97dfe10f3a25b7f401eba4d4ff908fbcb0731b3a3e709b1d7fd" exitCode=0 Jan 21 11:12:05 crc kubenswrapper[4881]: I0121 11:12:05.411504 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hd2w7" event={"ID":"9873ada5-628e-4b25-b739-4478cbe17296","Type":"ContainerDied","Data":"4449fd6347c9d97dfe10f3a25b7f401eba4d4ff908fbcb0731b3a3e709b1d7fd"} Jan 21 11:12:06 crc kubenswrapper[4881]: I0121 11:12:06.493163 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hd2w7" event={"ID":"9873ada5-628e-4b25-b739-4478cbe17296","Type":"ContainerStarted","Data":"ad72f3f3b967cd853e49b723fe72a51bb988f34ee4eb1f5a8162feb15abaf823"} Jan 21 11:12:06 crc kubenswrapper[4881]: I0121 11:12:06.516978 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hd2w7" podStartSLOduration=3.069096171 podStartE2EDuration="6.516958127s" podCreationTimestamp="2026-01-21 11:12:00 +0000 UTC" firstStartedPulling="2026-01-21 11:12:02.366625405 +0000 UTC m=+909.626581874" lastFinishedPulling="2026-01-21 11:12:05.814487351 +0000 UTC m=+913.074443830" observedRunningTime="2026-01-21 11:12:06.516253821 +0000 UTC m=+913.776210300" watchObservedRunningTime="2026-01-21 11:12:06.516958127 +0000 UTC m=+913.776914596" Jan 21 11:12:07 crc kubenswrapper[4881]: I0121 11:12:07.537893 4881 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6"] Jan 21 11:12:07 crc kubenswrapper[4881]: I0121 11:12:07.539963 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" Jan 21 11:12:07 crc kubenswrapper[4881]: I0121 11:12:07.541864 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 11:12:07 crc kubenswrapper[4881]: I0121 11:12:07.545526 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6"] Jan 21 11:12:07 crc kubenswrapper[4881]: I0121 11:12:07.680088 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6\" (UID: \"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" Jan 21 11:12:07 crc kubenswrapper[4881]: I0121 11:12:07.680167 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6\" (UID: \"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" Jan 21 11:12:07 crc kubenswrapper[4881]: I0121 11:12:07.680253 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpcpk\" (UniqueName: \"kubernetes.io/projected/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-kube-api-access-bpcpk\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6\" (UID: \"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" Jan 21 11:12:07 crc kubenswrapper[4881]: I0121 11:12:07.781613 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6\" (UID: \"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" Jan 21 11:12:07 crc kubenswrapper[4881]: I0121 11:12:07.781708 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6\" (UID: \"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" Jan 21 11:12:07 crc kubenswrapper[4881]: I0121 11:12:07.781803 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpcpk\" (UniqueName: \"kubernetes.io/projected/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-kube-api-access-bpcpk\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6\" (UID: \"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60\") " 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" Jan 21 11:12:07 crc kubenswrapper[4881]: I0121 11:12:07.782374 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6\" (UID: \"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" Jan 21 11:12:07 crc kubenswrapper[4881]: I0121 11:12:07.782420 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6\" (UID: \"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" Jan 21 11:12:07 crc kubenswrapper[4881]: I0121 11:12:07.805187 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpcpk\" (UniqueName: \"kubernetes.io/projected/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-kube-api-access-bpcpk\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6\" (UID: \"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" Jan 21 11:12:07 crc kubenswrapper[4881]: I0121 11:12:07.859543 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" Jan 21 11:12:08 crc kubenswrapper[4881]: I0121 11:12:08.399579 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6"] Jan 21 11:12:08 crc kubenswrapper[4881]: I0121 11:12:08.508009 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" event={"ID":"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60","Type":"ContainerStarted","Data":"f6649bc9fafdaf55ba8ea4b9308d5ba6f3cee44fcd008de9d317c8c9bf19faaa"} Jan 21 11:12:10 crc kubenswrapper[4881]: I0121 11:12:10.523062 4881 generic.go:334] "Generic (PLEG): container finished" podID="5c9dc897-764d-4f6c-ade8-99d7aa2d8d60" containerID="1882de17e4ad10c71734e26a15c796980da2428ebe8ae69676e484978869d6a9" exitCode=0 Jan 21 11:12:10 crc kubenswrapper[4881]: I0121 11:12:10.523184 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" event={"ID":"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60","Type":"ContainerDied","Data":"1882de17e4ad10c71734e26a15c796980da2428ebe8ae69676e484978869d6a9"} Jan 21 11:12:10 crc kubenswrapper[4881]: I0121 11:12:10.853529 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hd2w7" Jan 21 11:12:10 crc kubenswrapper[4881]: I0121 11:12:10.853592 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hd2w7" Jan 21 11:12:10 crc kubenswrapper[4881]: I0121 11:12:10.911313 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hd2w7" Jan 21 11:12:11 crc kubenswrapper[4881]: I0121 11:12:11.585609 4881 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hd2w7" Jan 21 11:12:12 crc kubenswrapper[4881]: I0121 11:12:12.538796 4881 generic.go:334] "Generic (PLEG): container finished" podID="5c9dc897-764d-4f6c-ade8-99d7aa2d8d60" containerID="10a548534daadca5f848109f059acff5c67d2840c1dc7cb3bda7e203f29a597a" exitCode=0 Jan 21 11:12:12 crc kubenswrapper[4881]: I0121 11:12:12.538844 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" event={"ID":"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60","Type":"ContainerDied","Data":"10a548534daadca5f848109f059acff5c67d2840c1dc7cb3bda7e203f29a597a"} Jan 21 11:12:12 crc kubenswrapper[4881]: I0121 11:12:12.878365 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g5nz8"] Jan 21 11:12:12 crc kubenswrapper[4881]: I0121 11:12:12.880254 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g5nz8" Jan 21 11:12:12 crc kubenswrapper[4881]: I0121 11:12:12.897947 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g5nz8"] Jan 21 11:12:13 crc kubenswrapper[4881]: I0121 11:12:13.062123 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-catalog-content\") pod \"community-operators-g5nz8\" (UID: \"c6e4d311-b0fc-4051-a125-a6bf330b7f8a\") " pod="openshift-marketplace/community-operators-g5nz8" Jan 21 11:12:13 crc kubenswrapper[4881]: I0121 11:12:13.062523 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w2kj\" (UniqueName: \"kubernetes.io/projected/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-kube-api-access-5w2kj\") pod \"community-operators-g5nz8\" (UID: \"c6e4d311-b0fc-4051-a125-a6bf330b7f8a\") " pod="openshift-marketplace/community-operators-g5nz8" Jan 21 11:12:13 crc kubenswrapper[4881]: I0121 11:12:13.062586 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-utilities\") pod \"community-operators-g5nz8\" (UID: \"c6e4d311-b0fc-4051-a125-a6bf330b7f8a\") " pod="openshift-marketplace/community-operators-g5nz8" Jan 21 11:12:13 crc kubenswrapper[4881]: I0121 11:12:13.164256 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-catalog-content\") pod \"community-operators-g5nz8\" (UID: \"c6e4d311-b0fc-4051-a125-a6bf330b7f8a\") " pod="openshift-marketplace/community-operators-g5nz8" Jan 21 11:12:13 crc kubenswrapper[4881]: I0121 11:12:13.164337 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w2kj\" (UniqueName: \"kubernetes.io/projected/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-kube-api-access-5w2kj\") pod \"community-operators-g5nz8\" (UID: \"c6e4d311-b0fc-4051-a125-a6bf330b7f8a\") " pod="openshift-marketplace/community-operators-g5nz8" Jan 21 11:12:13 crc kubenswrapper[4881]: I0121 11:12:13.164396 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-utilities\") pod 
\"community-operators-g5nz8\" (UID: \"c6e4d311-b0fc-4051-a125-a6bf330b7f8a\") " pod="openshift-marketplace/community-operators-g5nz8" Jan 21 11:12:13 crc kubenswrapper[4881]: I0121 11:12:13.164886 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-catalog-content\") pod \"community-operators-g5nz8\" (UID: \"c6e4d311-b0fc-4051-a125-a6bf330b7f8a\") " pod="openshift-marketplace/community-operators-g5nz8" Jan 21 11:12:13 crc kubenswrapper[4881]: I0121 11:12:13.164911 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-utilities\") pod \"community-operators-g5nz8\" (UID: \"c6e4d311-b0fc-4051-a125-a6bf330b7f8a\") " pod="openshift-marketplace/community-operators-g5nz8" Jan 21 11:12:13 crc kubenswrapper[4881]: I0121 11:12:13.190564 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w2kj\" (UniqueName: \"kubernetes.io/projected/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-kube-api-access-5w2kj\") pod \"community-operators-g5nz8\" (UID: \"c6e4d311-b0fc-4051-a125-a6bf330b7f8a\") " pod="openshift-marketplace/community-operators-g5nz8" Jan 21 11:12:13 crc kubenswrapper[4881]: I0121 11:12:13.194890 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g5nz8" Jan 21 11:12:13 crc kubenswrapper[4881]: I0121 11:12:13.553502 4881 generic.go:334] "Generic (PLEG): container finished" podID="5c9dc897-764d-4f6c-ade8-99d7aa2d8d60" containerID="d9653f5446680e2092e8263cde31db5cc02cb9168f0736fc9b45955301e3269c" exitCode=0 Jan 21 11:12:13 crc kubenswrapper[4881]: I0121 11:12:13.553543 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" event={"ID":"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60","Type":"ContainerDied","Data":"d9653f5446680e2092e8263cde31db5cc02cb9168f0736fc9b45955301e3269c"} Jan 21 11:12:13 crc kubenswrapper[4881]: I0121 11:12:13.597925 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g5nz8"] Jan 21 11:12:13 crc kubenswrapper[4881]: W0121 11:12:13.629911 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6e4d311_b0fc_4051_a125_a6bf330b7f8a.slice/crio-74ed369f908f9a46be16b0bf5cbde512da30460d84d754009e5d06f649c85ef9 WatchSource:0}: Error finding container 74ed369f908f9a46be16b0bf5cbde512da30460d84d754009e5d06f649c85ef9: Status 404 returned error can't find the container with id 74ed369f908f9a46be16b0bf5cbde512da30460d84d754009e5d06f649c85ef9 Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.281812 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hd2w7"] Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.282173 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hd2w7" podUID="9873ada5-628e-4b25-b739-4478cbe17296" containerName="registry-server" containerID="cri-o://ad72f3f3b967cd853e49b723fe72a51bb988f34ee4eb1f5a8162feb15abaf823" gracePeriod=2 Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.565749 4881 generic.go:334] "Generic (PLEG): container finished" podID="c6e4d311-b0fc-4051-a125-a6bf330b7f8a" 
containerID="df3854cc5438f398248beabb77d60eeb96def4d61790e2ebbf7c22c19efc8536" exitCode=0 Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.566170 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5nz8" event={"ID":"c6e4d311-b0fc-4051-a125-a6bf330b7f8a","Type":"ContainerDied","Data":"df3854cc5438f398248beabb77d60eeb96def4d61790e2ebbf7c22c19efc8536"} Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.566205 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5nz8" event={"ID":"c6e4d311-b0fc-4051-a125-a6bf330b7f8a","Type":"ContainerStarted","Data":"74ed369f908f9a46be16b0bf5cbde512da30460d84d754009e5d06f649c85ef9"} Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.570182 4881 generic.go:334] "Generic (PLEG): container finished" podID="9873ada5-628e-4b25-b739-4478cbe17296" containerID="ad72f3f3b967cd853e49b723fe72a51bb988f34ee4eb1f5a8162feb15abaf823" exitCode=0 Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.570435 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hd2w7" event={"ID":"9873ada5-628e-4b25-b739-4478cbe17296","Type":"ContainerDied","Data":"ad72f3f3b967cd853e49b723fe72a51bb988f34ee4eb1f5a8162feb15abaf823"} Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.695101 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hd2w7" Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.790585 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9873ada5-628e-4b25-b739-4478cbe17296-catalog-content\") pod \"9873ada5-628e-4b25-b739-4478cbe17296\" (UID: \"9873ada5-628e-4b25-b739-4478cbe17296\") " Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.790719 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcd7r\" (UniqueName: \"kubernetes.io/projected/9873ada5-628e-4b25-b739-4478cbe17296-kube-api-access-xcd7r\") pod \"9873ada5-628e-4b25-b739-4478cbe17296\" (UID: \"9873ada5-628e-4b25-b739-4478cbe17296\") " Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.790744 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9873ada5-628e-4b25-b739-4478cbe17296-utilities\") pod \"9873ada5-628e-4b25-b739-4478cbe17296\" (UID: \"9873ada5-628e-4b25-b739-4478cbe17296\") " Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.791837 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9873ada5-628e-4b25-b739-4478cbe17296-utilities" (OuterVolumeSpecName: "utilities") pod "9873ada5-628e-4b25-b739-4478cbe17296" (UID: "9873ada5-628e-4b25-b739-4478cbe17296"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.808592 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9873ada5-628e-4b25-b739-4478cbe17296-kube-api-access-xcd7r" (OuterVolumeSpecName: "kube-api-access-xcd7r") pod "9873ada5-628e-4b25-b739-4478cbe17296" (UID: "9873ada5-628e-4b25-b739-4478cbe17296"). InnerVolumeSpecName "kube-api-access-xcd7r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.843263 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9873ada5-628e-4b25-b739-4478cbe17296-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9873ada5-628e-4b25-b739-4478cbe17296" (UID: "9873ada5-628e-4b25-b739-4478cbe17296"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.892854 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcd7r\" (UniqueName: \"kubernetes.io/projected/9873ada5-628e-4b25-b739-4478cbe17296-kube-api-access-xcd7r\") on node \"crc\" DevicePath \"\"" Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.892900 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9873ada5-628e-4b25-b739-4478cbe17296-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.892915 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9873ada5-628e-4b25-b739-4478cbe17296-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.908292 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.994459 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-bundle\") pod \"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60\" (UID: \"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60\") " Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.994617 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-util\") pod \"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60\" (UID: \"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60\") " Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.994678 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpcpk\" (UniqueName: \"kubernetes.io/projected/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-kube-api-access-bpcpk\") pod \"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60\" (UID: \"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60\") " Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.995856 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-bundle" (OuterVolumeSpecName: "bundle") pod "5c9dc897-764d-4f6c-ade8-99d7aa2d8d60" (UID: "5c9dc897-764d-4f6c-ade8-99d7aa2d8d60"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.998017 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-kube-api-access-bpcpk" (OuterVolumeSpecName: "kube-api-access-bpcpk") pod "5c9dc897-764d-4f6c-ade8-99d7aa2d8d60" (UID: "5c9dc897-764d-4f6c-ade8-99d7aa2d8d60"). InnerVolumeSpecName "kube-api-access-bpcpk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:12:15 crc kubenswrapper[4881]: I0121 11:12:15.010595 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-util" (OuterVolumeSpecName: "util") pod "5c9dc897-764d-4f6c-ade8-99d7aa2d8d60" (UID: "5c9dc897-764d-4f6c-ade8-99d7aa2d8d60"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:12:15 crc kubenswrapper[4881]: I0121 11:12:15.095938 4881 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-util\") on node \"crc\" DevicePath \"\"" Jan 21 11:12:15 crc kubenswrapper[4881]: I0121 11:12:15.095997 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpcpk\" (UniqueName: \"kubernetes.io/projected/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-kube-api-access-bpcpk\") on node \"crc\" DevicePath \"\"" Jan 21 11:12:15 crc kubenswrapper[4881]: I0121 11:12:15.096011 4881 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:12:15 crc kubenswrapper[4881]: I0121 11:12:15.583651 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" Jan 21 11:12:15 crc kubenswrapper[4881]: I0121 11:12:15.583660 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" event={"ID":"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60","Type":"ContainerDied","Data":"f6649bc9fafdaf55ba8ea4b9308d5ba6f3cee44fcd008de9d317c8c9bf19faaa"} Jan 21 11:12:15 crc kubenswrapper[4881]: I0121 11:12:15.584066 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6649bc9fafdaf55ba8ea4b9308d5ba6f3cee44fcd008de9d317c8c9bf19faaa" Jan 21 11:12:15 crc kubenswrapper[4881]: I0121 11:12:15.589217 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hd2w7" Jan 21 11:12:15 crc kubenswrapper[4881]: I0121 11:12:15.589240 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hd2w7" event={"ID":"9873ada5-628e-4b25-b739-4478cbe17296","Type":"ContainerDied","Data":"03078ae198c32837e1314238531f4e1b4ba354a1768bb3f9c6c3700512d7bdc0"} Jan 21 11:12:15 crc kubenswrapper[4881]: I0121 11:12:15.589317 4881 scope.go:117] "RemoveContainer" containerID="ad72f3f3b967cd853e49b723fe72a51bb988f34ee4eb1f5a8162feb15abaf823" Jan 21 11:12:15 crc kubenswrapper[4881]: I0121 11:12:15.594004 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5nz8" event={"ID":"c6e4d311-b0fc-4051-a125-a6bf330b7f8a","Type":"ContainerStarted","Data":"4e6577a7360cb44c2d5f3b476fb5769c5dfcb1d89663a17a2c75099c7b82351e"} Jan 21 11:12:15 crc kubenswrapper[4881]: I0121 11:12:15.613072 4881 scope.go:117] "RemoveContainer" containerID="4449fd6347c9d97dfe10f3a25b7f401eba4d4ff908fbcb0731b3a3e709b1d7fd" Jan 21 11:12:15 crc kubenswrapper[4881]: I0121 11:12:15.613543 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hd2w7"] Jan 21 11:12:15 crc kubenswrapper[4881]: I0121 11:12:15.621473 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hd2w7"] Jan 21 11:12:15 crc kubenswrapper[4881]: I0121 11:12:15.637621 4881 scope.go:117] "RemoveContainer" containerID="62a4996a49fa7e70025c2e6c3982db1575edae9d0df4fbdfdba74d92ed4e5ed6" Jan 21 11:12:16 crc kubenswrapper[4881]: I0121 11:12:16.602772 4881 generic.go:334] "Generic (PLEG): container finished" podID="c6e4d311-b0fc-4051-a125-a6bf330b7f8a" containerID="4e6577a7360cb44c2d5f3b476fb5769c5dfcb1d89663a17a2c75099c7b82351e" exitCode=0 Jan 21 11:12:16 crc kubenswrapper[4881]: I0121 11:12:16.603552 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5nz8" event={"ID":"c6e4d311-b0fc-4051-a125-a6bf330b7f8a","Type":"ContainerDied","Data":"4e6577a7360cb44c2d5f3b476fb5769c5dfcb1d89663a17a2c75099c7b82351e"} Jan 21 11:12:17 crc kubenswrapper[4881]: I0121 11:12:17.318892 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9873ada5-628e-4b25-b739-4478cbe17296" path="/var/lib/kubelet/pods/9873ada5-628e-4b25-b739-4478cbe17296/volumes" Jan 21 11:12:17 crc kubenswrapper[4881]: I0121 11:12:17.613191 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5nz8" event={"ID":"c6e4d311-b0fc-4051-a125-a6bf330b7f8a","Type":"ContainerStarted","Data":"d310d8932ee76007b52918c753c9b1348a7685ba6e304db302f41aad72fcd953"} Jan 21 11:12:17 crc kubenswrapper[4881]: I0121 11:12:17.654019 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g5nz8" podStartSLOduration=3.071560317 podStartE2EDuration="5.653992396s" podCreationTimestamp="2026-01-21 11:12:12 +0000 UTC" firstStartedPulling="2026-01-21 11:12:14.567803422 +0000 UTC m=+921.827759891" lastFinishedPulling="2026-01-21 11:12:17.150235511 +0000 UTC m=+924.410191970" observedRunningTime="2026-01-21 11:12:17.649751022 +0000 UTC m=+924.909707491" watchObservedRunningTime="2026-01-21 11:12:17.653992396 +0000 UTC m=+924.913948865" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.197425 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-g5nz8" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.198706 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g5nz8" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.267678 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g5nz8" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.583364 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9"] Jan 21 11:12:23 crc kubenswrapper[4881]: E0121 11:12:23.583617 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9873ada5-628e-4b25-b739-4478cbe17296" containerName="extract-content" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.583631 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="9873ada5-628e-4b25-b739-4478cbe17296" containerName="extract-content" Jan 21 11:12:23 crc kubenswrapper[4881]: E0121 11:12:23.583643 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9873ada5-628e-4b25-b739-4478cbe17296" containerName="registry-server" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.583649 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="9873ada5-628e-4b25-b739-4478cbe17296" containerName="registry-server" Jan 21 11:12:23 crc kubenswrapper[4881]: E0121 11:12:23.583661 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c9dc897-764d-4f6c-ade8-99d7aa2d8d60" containerName="pull" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.583667 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c9dc897-764d-4f6c-ade8-99d7aa2d8d60" containerName="pull" Jan 21 11:12:23 crc kubenswrapper[4881]: E0121 11:12:23.583686 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c9dc897-764d-4f6c-ade8-99d7aa2d8d60" containerName="util" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.583695 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c9dc897-764d-4f6c-ade8-99d7aa2d8d60" containerName="util" Jan 21 11:12:23 crc kubenswrapper[4881]: E0121 11:12:23.583705 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c9dc897-764d-4f6c-ade8-99d7aa2d8d60" containerName="extract" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.583711 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c9dc897-764d-4f6c-ade8-99d7aa2d8d60" containerName="extract" Jan 21 11:12:23 crc kubenswrapper[4881]: E0121 11:12:23.583750 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9873ada5-628e-4b25-b739-4478cbe17296" containerName="extract-utilities" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.583757 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="9873ada5-628e-4b25-b739-4478cbe17296" containerName="extract-utilities" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.583876 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="9873ada5-628e-4b25-b739-4478cbe17296" containerName="registry-server" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.583889 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c9dc897-764d-4f6c-ade8-99d7aa2d8d60" containerName="extract" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.584323 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.587024 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.592635 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.593003 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.593278 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.593445 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-gkkls" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.619268 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9"] Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.628523 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5kxv\" (UniqueName: \"kubernetes.io/projected/769e47b6-bd47-489d-9b99-4f2f0e30c4fd-kube-api-access-f5kxv\") pod \"metallb-operator-controller-manager-58bd8f8bd-8k4c9\" (UID: \"769e47b6-bd47-489d-9b99-4f2f0e30c4fd\") " pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.628977 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/769e47b6-bd47-489d-9b99-4f2f0e30c4fd-apiservice-cert\") pod \"metallb-operator-controller-manager-58bd8f8bd-8k4c9\" (UID: \"769e47b6-bd47-489d-9b99-4f2f0e30c4fd\") " pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.629201 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/769e47b6-bd47-489d-9b99-4f2f0e30c4fd-webhook-cert\") pod \"metallb-operator-controller-manager-58bd8f8bd-8k4c9\" (UID: \"769e47b6-bd47-489d-9b99-4f2f0e30c4fd\") " pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.721804 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g5nz8" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.730516 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/769e47b6-bd47-489d-9b99-4f2f0e30c4fd-apiservice-cert\") pod \"metallb-operator-controller-manager-58bd8f8bd-8k4c9\" (UID: \"769e47b6-bd47-489d-9b99-4f2f0e30c4fd\") " pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.730619 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/769e47b6-bd47-489d-9b99-4f2f0e30c4fd-webhook-cert\") pod \"metallb-operator-controller-manager-58bd8f8bd-8k4c9\" (UID: 
\"769e47b6-bd47-489d-9b99-4f2f0e30c4fd\") " pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.730654 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5kxv\" (UniqueName: \"kubernetes.io/projected/769e47b6-bd47-489d-9b99-4f2f0e30c4fd-kube-api-access-f5kxv\") pod \"metallb-operator-controller-manager-58bd8f8bd-8k4c9\" (UID: \"769e47b6-bd47-489d-9b99-4f2f0e30c4fd\") " pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.740875 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/769e47b6-bd47-489d-9b99-4f2f0e30c4fd-apiservice-cert\") pod \"metallb-operator-controller-manager-58bd8f8bd-8k4c9\" (UID: \"769e47b6-bd47-489d-9b99-4f2f0e30c4fd\") " pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.753839 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/769e47b6-bd47-489d-9b99-4f2f0e30c4fd-webhook-cert\") pod \"metallb-operator-controller-manager-58bd8f8bd-8k4c9\" (UID: \"769e47b6-bd47-489d-9b99-4f2f0e30c4fd\") " pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.759395 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5kxv\" (UniqueName: \"kubernetes.io/projected/769e47b6-bd47-489d-9b99-4f2f0e30c4fd-kube-api-access-f5kxv\") pod \"metallb-operator-controller-manager-58bd8f8bd-8k4c9\" (UID: \"769e47b6-bd47-489d-9b99-4f2f0e30c4fd\") " pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.907045 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9" Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.064128 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r"] Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.065274 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r" Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.074711 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.075152 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.077165 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-k6r4l" Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.078499 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r"] Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.136902 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a194c95e-cbcb-4d7e-a631-d4a14989e985-webhook-cert\") pod \"metallb-operator-webhook-server-5cd4664cfc-6lg4r\" (UID: \"a194c95e-cbcb-4d7e-a631-d4a14989e985\") " pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r" Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.137019 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a194c95e-cbcb-4d7e-a631-d4a14989e985-apiservice-cert\") pod \"metallb-operator-webhook-server-5cd4664cfc-6lg4r\" (UID: \"a194c95e-cbcb-4d7e-a631-d4a14989e985\") " pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r" Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.137045 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgkn4\" (UniqueName: \"kubernetes.io/projected/a194c95e-cbcb-4d7e-a631-d4a14989e985-kube-api-access-pgkn4\") pod \"metallb-operator-webhook-server-5cd4664cfc-6lg4r\" (UID: \"a194c95e-cbcb-4d7e-a631-d4a14989e985\") " pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r" Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.239246 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a194c95e-cbcb-4d7e-a631-d4a14989e985-webhook-cert\") pod \"metallb-operator-webhook-server-5cd4664cfc-6lg4r\" (UID: \"a194c95e-cbcb-4d7e-a631-d4a14989e985\") " pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r" Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.239404 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a194c95e-cbcb-4d7e-a631-d4a14989e985-apiservice-cert\") pod \"metallb-operator-webhook-server-5cd4664cfc-6lg4r\" (UID: \"a194c95e-cbcb-4d7e-a631-d4a14989e985\") " pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r" Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.239437 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgkn4\" (UniqueName: \"kubernetes.io/projected/a194c95e-cbcb-4d7e-a631-d4a14989e985-kube-api-access-pgkn4\") pod \"metallb-operator-webhook-server-5cd4664cfc-6lg4r\" (UID: \"a194c95e-cbcb-4d7e-a631-d4a14989e985\") " pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r" Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 
Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.246041 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a194c95e-cbcb-4d7e-a631-d4a14989e985-apiservice-cert\") pod \"metallb-operator-webhook-server-5cd4664cfc-6lg4r\" (UID: \"a194c95e-cbcb-4d7e-a631-d4a14989e985\") " pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r"
Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.260345 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a194c95e-cbcb-4d7e-a631-d4a14989e985-webhook-cert\") pod \"metallb-operator-webhook-server-5cd4664cfc-6lg4r\" (UID: \"a194c95e-cbcb-4d7e-a631-d4a14989e985\") " pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r"
Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.264633 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgkn4\" (UniqueName: \"kubernetes.io/projected/a194c95e-cbcb-4d7e-a631-d4a14989e985-kube-api-access-pgkn4\") pod \"metallb-operator-webhook-server-5cd4664cfc-6lg4r\" (UID: \"a194c95e-cbcb-4d7e-a631-d4a14989e985\") " pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r"
Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.410654 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r"
Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.482957 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9"]
Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.672254 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9" event={"ID":"769e47b6-bd47-489d-9b99-4f2f0e30c4fd","Type":"ContainerStarted","Data":"20e8f1b52592529f288c94fb5f111cf2cc975c6b295bf0d52fff52c2eb16673e"}
Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.848337 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r"]
Jan 21 11:12:24 crc kubenswrapper[4881]: W0121 11:12:24.865850 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda194c95e_cbcb_4d7e_a631_d4a14989e985.slice/crio-427169d05c633df7c1574e0313ec71f4482be0ad8692d2b529198fdb6de67c46 WatchSource:0}: Error finding container 427169d05c633df7c1574e0313ec71f4482be0ad8692d2b529198fdb6de67c46: Status 404 returned error can't find the container with id 427169d05c633df7c1574e0313ec71f4482be0ad8692d2b529198fdb6de67c46
Jan 21 11:12:25 crc kubenswrapper[4881]: I0121 11:12:25.682331 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r" event={"ID":"a194c95e-cbcb-4d7e-a631-d4a14989e985","Type":"ContainerStarted","Data":"427169d05c633df7c1574e0313ec71f4482be0ad8692d2b529198fdb6de67c46"}
Jan 21 11:12:25 crc kubenswrapper[4881]: I0121 11:12:25.872767 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g5nz8"]
Jan 21 11:12:25 crc kubenswrapper[4881]: I0121 11:12:25.873153 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g5nz8" podUID="c6e4d311-b0fc-4051-a125-a6bf330b7f8a" containerName="registry-server" containerID="cri-o://d310d8932ee76007b52918c753c9b1348a7685ba6e304db302f41aad72fcd953" gracePeriod=2
Jan 21 11:12:26 crc kubenswrapper[4881]: I0121 11:12:26.691323 4881 generic.go:334] "Generic (PLEG): container finished" podID="c6e4d311-b0fc-4051-a125-a6bf330b7f8a" containerID="d310d8932ee76007b52918c753c9b1348a7685ba6e304db302f41aad72fcd953" exitCode=0
Jan 21 11:12:26 crc kubenswrapper[4881]: I0121 11:12:26.691395 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5nz8" event={"ID":"c6e4d311-b0fc-4051-a125-a6bf330b7f8a","Type":"ContainerDied","Data":"d310d8932ee76007b52918c753c9b1348a7685ba6e304db302f41aad72fcd953"}
Jan 21 11:12:29 crc kubenswrapper[4881]: I0121 11:12:29.054778 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g5nz8"
Jan 21 11:12:29 crc kubenswrapper[4881]: I0121 11:12:29.155289 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w2kj\" (UniqueName: \"kubernetes.io/projected/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-kube-api-access-5w2kj\") pod \"c6e4d311-b0fc-4051-a125-a6bf330b7f8a\" (UID: \"c6e4d311-b0fc-4051-a125-a6bf330b7f8a\") "
Jan 21 11:12:29 crc kubenswrapper[4881]: I0121 11:12:29.155396 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-catalog-content\") pod \"c6e4d311-b0fc-4051-a125-a6bf330b7f8a\" (UID: \"c6e4d311-b0fc-4051-a125-a6bf330b7f8a\") "
Jan 21 11:12:29 crc kubenswrapper[4881]: I0121 11:12:29.155443 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-utilities\") pod \"c6e4d311-b0fc-4051-a125-a6bf330b7f8a\" (UID: \"c6e4d311-b0fc-4051-a125-a6bf330b7f8a\") "
Jan 21 11:12:29 crc kubenswrapper[4881]: I0121 11:12:29.156634 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-utilities" (OuterVolumeSpecName: "utilities") pod "c6e4d311-b0fc-4051-a125-a6bf330b7f8a" (UID: "c6e4d311-b0fc-4051-a125-a6bf330b7f8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 11:12:29 crc kubenswrapper[4881]: I0121 11:12:29.176664 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-kube-api-access-5w2kj" (OuterVolumeSpecName: "kube-api-access-5w2kj") pod "c6e4d311-b0fc-4051-a125-a6bf330b7f8a" (UID: "c6e4d311-b0fc-4051-a125-a6bf330b7f8a"). InnerVolumeSpecName "kube-api-access-5w2kj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:12:29 crc kubenswrapper[4881]: I0121 11:12:29.228365 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6e4d311-b0fc-4051-a125-a6bf330b7f8a" (UID: "c6e4d311-b0fc-4051-a125-a6bf330b7f8a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 11:12:29 crc kubenswrapper[4881]: I0121 11:12:29.256845 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 11:12:29 crc kubenswrapper[4881]: I0121 11:12:29.256902 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 11:12:29 crc kubenswrapper[4881]: I0121 11:12:29.256917 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5w2kj\" (UniqueName: \"kubernetes.io/projected/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-kube-api-access-5w2kj\") on node \"crc\" DevicePath \"\""
Jan 21 11:12:29 crc kubenswrapper[4881]: I0121 11:12:29.859510 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5nz8" event={"ID":"c6e4d311-b0fc-4051-a125-a6bf330b7f8a","Type":"ContainerDied","Data":"74ed369f908f9a46be16b0bf5cbde512da30460d84d754009e5d06f649c85ef9"}
Jan 21 11:12:29 crc kubenswrapper[4881]: I0121 11:12:29.859870 4881 scope.go:117] "RemoveContainer" containerID="d310d8932ee76007b52918c753c9b1348a7685ba6e304db302f41aad72fcd953"
Jan 21 11:12:29 crc kubenswrapper[4881]: I0121 11:12:29.859611 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g5nz8"
Jan 21 11:12:29 crc kubenswrapper[4881]: I0121 11:12:29.890117 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g5nz8"]
Jan 21 11:12:29 crc kubenswrapper[4881]: I0121 11:12:29.903294 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g5nz8"]
Jan 21 11:12:31 crc kubenswrapper[4881]: I0121 11:12:31.320814 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6e4d311-b0fc-4051-a125-a6bf330b7f8a" path="/var/lib/kubelet/pods/c6e4d311-b0fc-4051-a125-a6bf330b7f8a/volumes"
Jan 21 11:12:31 crc kubenswrapper[4881]: I0121 11:12:31.453465 4881 scope.go:117] "RemoveContainer" containerID="4e6577a7360cb44c2d5f3b476fb5769c5dfcb1d89663a17a2c75099c7b82351e"
Jan 21 11:12:31 crc kubenswrapper[4881]: I0121 11:12:31.477123 4881 scope.go:117] "RemoveContainer" containerID="df3854cc5438f398248beabb77d60eeb96def4d61790e2ebbf7c22c19efc8536"
Jan 21 11:12:32 crc kubenswrapper[4881]: I0121 11:12:32.001147 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9" event={"ID":"769e47b6-bd47-489d-9b99-4f2f0e30c4fd","Type":"ContainerStarted","Data":"469d1a84fb7e1143a635a0f240ac0d81c15df0f6c6c64f3850c3a77fe34829fa"}
Jan 21 11:12:32 crc kubenswrapper[4881]: I0121 11:12:32.001520 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9"
Jan 21 11:12:32 crc kubenswrapper[4881]: I0121 11:12:32.002914 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r" event={"ID":"a194c95e-cbcb-4d7e-a631-d4a14989e985","Type":"ContainerStarted","Data":"6411a36a2b5fe0479760caffbd2a44059e4f587e831cd6f791fa64032702af1d"}
Jan 21 11:12:32 crc kubenswrapper[4881]: I0121 11:12:32.003318 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r"
Jan 21 11:12:32 crc kubenswrapper[4881]: I0121 11:12:32.033279 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9" podStartSLOduration=2.104741474 podStartE2EDuration="9.033263944s" podCreationTimestamp="2026-01-21 11:12:23 +0000 UTC" firstStartedPulling="2026-01-21 11:12:24.527332943 +0000 UTC m=+931.787289412" lastFinishedPulling="2026-01-21 11:12:31.455855403 +0000 UTC m=+938.715811882" observedRunningTime="2026-01-21 11:12:32.027924842 +0000 UTC m=+939.287881311" watchObservedRunningTime="2026-01-21 11:12:32.033263944 +0000 UTC m=+939.293220413"
Jan 21 11:12:32 crc kubenswrapper[4881]: I0121 11:12:32.057872 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r" podStartSLOduration=1.2838363529999999 podStartE2EDuration="8.057850206s" podCreationTimestamp="2026-01-21 11:12:24 +0000 UTC" firstStartedPulling="2026-01-21 11:12:24.869605622 +0000 UTC m=+932.129562091" lastFinishedPulling="2026-01-21 11:12:31.643619475 +0000 UTC m=+938.903575944" observedRunningTime="2026-01-21 11:12:32.055058538 +0000 UTC m=+939.315015007" watchObservedRunningTime="2026-01-21 11:12:32.057850206 +0000 UTC m=+939.317806705"
Jan 21 11:12:44 crc kubenswrapper[4881]: I0121 11:12:44.420365 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r"
Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.278699 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rvptz"]
Jan 21 11:13:03 crc kubenswrapper[4881]: E0121 11:13:03.279514 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e4d311-b0fc-4051-a125-a6bf330b7f8a" containerName="registry-server"
Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.279533 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e4d311-b0fc-4051-a125-a6bf330b7f8a" containerName="registry-server"
Jan 21 11:13:03 crc kubenswrapper[4881]: E0121 11:13:03.279561 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e4d311-b0fc-4051-a125-a6bf330b7f8a" containerName="extract-utilities"
Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.279570 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e4d311-b0fc-4051-a125-a6bf330b7f8a" containerName="extract-utilities"
Jan 21 11:13:03 crc kubenswrapper[4881]: E0121 11:13:03.279578 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e4d311-b0fc-4051-a125-a6bf330b7f8a" containerName="extract-content"
Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.279584 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e4d311-b0fc-4051-a125-a6bf330b7f8a" containerName="extract-content"
Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.279700 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6e4d311-b0fc-4051-a125-a6bf330b7f8a" containerName="registry-server"
Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.280678 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvptz"
Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.405490 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvptz"]
Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.455747 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/998c47dc-b621-4357-86b9-f6d08cac4799-utilities\") pod \"redhat-marketplace-rvptz\" (UID: \"998c47dc-b621-4357-86b9-f6d08cac4799\") " pod="openshift-marketplace/redhat-marketplace-rvptz"
Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.455845 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/998c47dc-b621-4357-86b9-f6d08cac4799-catalog-content\") pod \"redhat-marketplace-rvptz\" (UID: \"998c47dc-b621-4357-86b9-f6d08cac4799\") " pod="openshift-marketplace/redhat-marketplace-rvptz"
Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.455933 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tqdx\" (UniqueName: \"kubernetes.io/projected/998c47dc-b621-4357-86b9-f6d08cac4799-kube-api-access-7tqdx\") pod \"redhat-marketplace-rvptz\" (UID: \"998c47dc-b621-4357-86b9-f6d08cac4799\") " pod="openshift-marketplace/redhat-marketplace-rvptz"
Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.558030 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/998c47dc-b621-4357-86b9-f6d08cac4799-utilities\") pod \"redhat-marketplace-rvptz\" (UID: \"998c47dc-b621-4357-86b9-f6d08cac4799\") " pod="openshift-marketplace/redhat-marketplace-rvptz"
Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.558081 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/998c47dc-b621-4357-86b9-f6d08cac4799-catalog-content\") pod \"redhat-marketplace-rvptz\" (UID: \"998c47dc-b621-4357-86b9-f6d08cac4799\") " pod="openshift-marketplace/redhat-marketplace-rvptz"
Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.558102 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tqdx\" (UniqueName: \"kubernetes.io/projected/998c47dc-b621-4357-86b9-f6d08cac4799-kube-api-access-7tqdx\") pod \"redhat-marketplace-rvptz\" (UID: \"998c47dc-b621-4357-86b9-f6d08cac4799\") " pod="openshift-marketplace/redhat-marketplace-rvptz"
Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.558685 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/998c47dc-b621-4357-86b9-f6d08cac4799-utilities\") pod \"redhat-marketplace-rvptz\" (UID: \"998c47dc-b621-4357-86b9-f6d08cac4799\") " pod="openshift-marketplace/redhat-marketplace-rvptz"
Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.558752 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/998c47dc-b621-4357-86b9-f6d08cac4799-catalog-content\") pod \"redhat-marketplace-rvptz\" (UID: \"998c47dc-b621-4357-86b9-f6d08cac4799\") " pod="openshift-marketplace/redhat-marketplace-rvptz"
Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.586709 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tqdx\" (UniqueName: \"kubernetes.io/projected/998c47dc-b621-4357-86b9-f6d08cac4799-kube-api-access-7tqdx\") pod \"redhat-marketplace-rvptz\" (UID: \"998c47dc-b621-4357-86b9-f6d08cac4799\") " pod="openshift-marketplace/redhat-marketplace-rvptz"
Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.700626 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvptz"
Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.910373 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9"
Jan 21 11:13:04 crc kubenswrapper[4881]: I0121 11:13:04.471035 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvptz"]
Jan 21 11:13:04 crc kubenswrapper[4881]: I0121 11:13:04.680877 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvptz" event={"ID":"998c47dc-b621-4357-86b9-f6d08cac4799","Type":"ContainerStarted","Data":"185d2460e59c873ad3336643088425b69bafc3d60d4435c226adb50269ff2c1b"}
Jan 21 11:13:04 crc kubenswrapper[4881]: I0121 11:13:04.888384 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-lm54h"]
Jan 21 11:13:04 crc kubenswrapper[4881]: I0121 11:13:04.891405 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-lm54h"
Jan 21 11:13:04 crc kubenswrapper[4881]: I0121 11:13:04.894043 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 21 11:13:04 crc kubenswrapper[4881]: I0121 11:13:04.894455 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-8pmrf"
Jan 21 11:13:04 crc kubenswrapper[4881]: I0121 11:13:04.895029 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 21 11:13:04 crc kubenswrapper[4881]: I0121 11:13:04.900659 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk"]
Jan 21 11:13:04 crc kubenswrapper[4881]: I0121 11:13:04.901885 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk"
Jan 21 11:13:04 crc kubenswrapper[4881]: I0121 11:13:04.903597 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 21 11:13:04 crc kubenswrapper[4881]: I0121 11:13:04.929862 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk"]
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.004729 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-697j4"]
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.006255 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-697j4"
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.022804 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.022830 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.023062 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-7hvdd"
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.024362 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.035458 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-dmwlt"]
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.036434 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-dmwlt"
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.037772 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.059341 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-dmwlt"]
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.080517 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d055f37b-fab0-4fd0-b683-4a7974b21ad5-metrics\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h"
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.080586 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d055f37b-fab0-4fd0-b683-4a7974b21ad5-frr-startup\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h"
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.080623 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvjk8\" (UniqueName: \"kubernetes.io/projected/d055f37b-fab0-4fd0-b683-4a7974b21ad5-kube-api-access-hvjk8\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h"
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.080652 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eaaea696-21d8-4963-8364-82fa7bbb0e19-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-tzxpk\" (UID: \"eaaea696-21d8-4963-8364-82fa7bbb0e19\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk"
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.080677 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d055f37b-fab0-4fd0-b683-4a7974b21ad5-reloader\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h"
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.080841 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d055f37b-fab0-4fd0-b683-4a7974b21ad5-frr-sockets\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h"
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.080893 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtlk4\" (UniqueName: \"kubernetes.io/projected/eaaea696-21d8-4963-8364-82fa7bbb0e19-kube-api-access-jtlk4\") pod \"frr-k8s-webhook-server-7df86c4f6c-tzxpk\" (UID: \"eaaea696-21d8-4963-8364-82fa7bbb0e19\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk"
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.080957 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d055f37b-fab0-4fd0-b683-4a7974b21ad5-metrics-certs\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h"
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.080989 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d055f37b-fab0-4fd0-b683-4a7974b21ad5-frr-conf\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h"
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.183706 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f265a6e2-ea90-45ea-89c0-178d25243784-metallb-excludel2\") pod \"speaker-697j4\" (UID: \"f265a6e2-ea90-45ea-89c0-178d25243784\") " pod="metallb-system/speaker-697j4"
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.183994 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4a109b4-26ee-4a46-9333-989cf87c0ff7-cert\") pod \"controller-6968d8fdc4-dmwlt\" (UID: \"c4a109b4-26ee-4a46-9333-989cf87c0ff7\") " pod="metallb-system/controller-6968d8fdc4-dmwlt"
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.184041 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c4a109b4-26ee-4a46-9333-989cf87c0ff7-metrics-certs\") pod \"controller-6968d8fdc4-dmwlt\" (UID: \"c4a109b4-26ee-4a46-9333-989cf87c0ff7\") " pod="metallb-system/controller-6968d8fdc4-dmwlt"
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.184087 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm47n\" (UniqueName: \"kubernetes.io/projected/f265a6e2-ea90-45ea-89c0-178d25243784-kube-api-access-wm47n\") pod \"speaker-697j4\" (UID: \"f265a6e2-ea90-45ea-89c0-178d25243784\") " pod="metallb-system/speaker-697j4"
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.184110 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f265a6e2-ea90-45ea-89c0-178d25243784-memberlist\") pod \"speaker-697j4\" (UID: \"f265a6e2-ea90-45ea-89c0-178d25243784\") " pod="metallb-system/speaker-697j4"
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.184208 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d055f37b-fab0-4fd0-b683-4a7974b21ad5-metrics\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h"
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.184244 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d055f37b-fab0-4fd0-b683-4a7974b21ad5-frr-startup\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h"
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.184272 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvjk8\" (UniqueName: \"kubernetes.io/projected/d055f37b-fab0-4fd0-b683-4a7974b21ad5-kube-api-access-hvjk8\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h"
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.184289 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f265a6e2-ea90-45ea-89c0-178d25243784-metrics-certs\") pod \"speaker-697j4\" (UID: \"f265a6e2-ea90-45ea-89c0-178d25243784\") " pod="metallb-system/speaker-697j4"
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.184321 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eaaea696-21d8-4963-8364-82fa7bbb0e19-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-tzxpk\" (UID: \"eaaea696-21d8-4963-8364-82fa7bbb0e19\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk"
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.184343 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d055f37b-fab0-4fd0-b683-4a7974b21ad5-reloader\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h"
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.184376 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d055f37b-fab0-4fd0-b683-4a7974b21ad5-frr-sockets\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h"
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.184420 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtlk4\" (UniqueName: \"kubernetes.io/projected/eaaea696-21d8-4963-8364-82fa7bbb0e19-kube-api-access-jtlk4\") pod \"frr-k8s-webhook-server-7df86c4f6c-tzxpk\" (UID: \"eaaea696-21d8-4963-8364-82fa7bbb0e19\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk"
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.184456 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6d7k\" (UniqueName: \"kubernetes.io/projected/c4a109b4-26ee-4a46-9333-989cf87c0ff7-kube-api-access-b6d7k\") pod \"controller-6968d8fdc4-dmwlt\" (UID: \"c4a109b4-26ee-4a46-9333-989cf87c0ff7\") " pod="metallb-system/controller-6968d8fdc4-dmwlt"
Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.184473 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d055f37b-fab0-4fd0-b683-4a7974b21ad5-metrics-certs\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h"
\"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.184501 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d055f37b-fab0-4fd0-b683-4a7974b21ad5-frr-conf\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.185001 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d055f37b-fab0-4fd0-b683-4a7974b21ad5-frr-conf\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.185253 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d055f37b-fab0-4fd0-b683-4a7974b21ad5-metrics\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: E0121 11:13:05.185363 4881 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 21 11:13:05 crc kubenswrapper[4881]: E0121 11:13:05.185439 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d055f37b-fab0-4fd0-b683-4a7974b21ad5-metrics-certs podName:d055f37b-fab0-4fd0-b683-4a7974b21ad5 nodeName:}" failed. No retries permitted until 2026-01-21 11:13:05.685397302 +0000 UTC m=+972.945353771 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d055f37b-fab0-4fd0-b683-4a7974b21ad5-metrics-certs") pod "frr-k8s-lm54h" (UID: "d055f37b-fab0-4fd0-b683-4a7974b21ad5") : secret "frr-k8s-certs-secret" not found Jan 21 11:13:05 crc kubenswrapper[4881]: E0121 11:13:05.185464 4881 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.185492 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d055f37b-fab0-4fd0-b683-4a7974b21ad5-reloader\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: E0121 11:13:05.185559 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaaea696-21d8-4963-8364-82fa7bbb0e19-cert podName:eaaea696-21d8-4963-8364-82fa7bbb0e19 nodeName:}" failed. No retries permitted until 2026-01-21 11:13:05.685536235 +0000 UTC m=+972.945492704 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/eaaea696-21d8-4963-8364-82fa7bbb0e19-cert") pod "frr-k8s-webhook-server-7df86c4f6c-tzxpk" (UID: "eaaea696-21d8-4963-8364-82fa7bbb0e19") : secret "frr-k8s-webhook-server-cert" not found Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.185662 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d055f37b-fab0-4fd0-b683-4a7974b21ad5-frr-sockets\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.186403 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d055f37b-fab0-4fd0-b683-4a7974b21ad5-frr-startup\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.212497 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvjk8\" (UniqueName: \"kubernetes.io/projected/d055f37b-fab0-4fd0-b683-4a7974b21ad5-kube-api-access-hvjk8\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.213462 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtlk4\" (UniqueName: \"kubernetes.io/projected/eaaea696-21d8-4963-8364-82fa7bbb0e19-kube-api-access-jtlk4\") pod \"frr-k8s-webhook-server-7df86c4f6c-tzxpk\" (UID: \"eaaea696-21d8-4963-8364-82fa7bbb0e19\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.285746 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6d7k\" (UniqueName: \"kubernetes.io/projected/c4a109b4-26ee-4a46-9333-989cf87c0ff7-kube-api-access-b6d7k\") pod \"controller-6968d8fdc4-dmwlt\" (UID: \"c4a109b4-26ee-4a46-9333-989cf87c0ff7\") " pod="metallb-system/controller-6968d8fdc4-dmwlt" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.285902 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f265a6e2-ea90-45ea-89c0-178d25243784-metallb-excludel2\") pod \"speaker-697j4\" (UID: \"f265a6e2-ea90-45ea-89c0-178d25243784\") " pod="metallb-system/speaker-697j4" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.285930 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4a109b4-26ee-4a46-9333-989cf87c0ff7-cert\") pod \"controller-6968d8fdc4-dmwlt\" (UID: \"c4a109b4-26ee-4a46-9333-989cf87c0ff7\") " pod="metallb-system/controller-6968d8fdc4-dmwlt" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.285952 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c4a109b4-26ee-4a46-9333-989cf87c0ff7-metrics-certs\") pod \"controller-6968d8fdc4-dmwlt\" (UID: \"c4a109b4-26ee-4a46-9333-989cf87c0ff7\") " pod="metallb-system/controller-6968d8fdc4-dmwlt" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.285977 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm47n\" (UniqueName: 
\"kubernetes.io/projected/f265a6e2-ea90-45ea-89c0-178d25243784-kube-api-access-wm47n\") pod \"speaker-697j4\" (UID: \"f265a6e2-ea90-45ea-89c0-178d25243784\") " pod="metallb-system/speaker-697j4" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.285995 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f265a6e2-ea90-45ea-89c0-178d25243784-memberlist\") pod \"speaker-697j4\" (UID: \"f265a6e2-ea90-45ea-89c0-178d25243784\") " pod="metallb-system/speaker-697j4" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.286036 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f265a6e2-ea90-45ea-89c0-178d25243784-metrics-certs\") pod \"speaker-697j4\" (UID: \"f265a6e2-ea90-45ea-89c0-178d25243784\") " pod="metallb-system/speaker-697j4" Jan 21 11:13:05 crc kubenswrapper[4881]: E0121 11:13:05.286340 4881 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 21 11:13:05 crc kubenswrapper[4881]: E0121 11:13:05.286409 4881 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 11:13:05 crc kubenswrapper[4881]: E0121 11:13:05.286416 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4a109b4-26ee-4a46-9333-989cf87c0ff7-metrics-certs podName:c4a109b4-26ee-4a46-9333-989cf87c0ff7 nodeName:}" failed. No retries permitted until 2026-01-21 11:13:05.786399307 +0000 UTC m=+973.046355776 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c4a109b4-26ee-4a46-9333-989cf87c0ff7-metrics-certs") pod "controller-6968d8fdc4-dmwlt" (UID: "c4a109b4-26ee-4a46-9333-989cf87c0ff7") : secret "controller-certs-secret" not found Jan 21 11:13:05 crc kubenswrapper[4881]: E0121 11:13:05.286497 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f265a6e2-ea90-45ea-89c0-178d25243784-memberlist podName:f265a6e2-ea90-45ea-89c0-178d25243784 nodeName:}" failed. No retries permitted until 2026-01-21 11:13:05.786479189 +0000 UTC m=+973.046435848 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f265a6e2-ea90-45ea-89c0-178d25243784-memberlist") pod "speaker-697j4" (UID: "f265a6e2-ea90-45ea-89c0-178d25243784") : secret "metallb-memberlist" not found Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.286949 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f265a6e2-ea90-45ea-89c0-178d25243784-metallb-excludel2\") pod \"speaker-697j4\" (UID: \"f265a6e2-ea90-45ea-89c0-178d25243784\") " pod="metallb-system/speaker-697j4" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.290700 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.291968 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f265a6e2-ea90-45ea-89c0-178d25243784-metrics-certs\") pod \"speaker-697j4\" (UID: \"f265a6e2-ea90-45ea-89c0-178d25243784\") " pod="metallb-system/speaker-697j4" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.300485 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4a109b4-26ee-4a46-9333-989cf87c0ff7-cert\") pod \"controller-6968d8fdc4-dmwlt\" (UID: \"c4a109b4-26ee-4a46-9333-989cf87c0ff7\") " pod="metallb-system/controller-6968d8fdc4-dmwlt" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.309238 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm47n\" (UniqueName: \"kubernetes.io/projected/f265a6e2-ea90-45ea-89c0-178d25243784-kube-api-access-wm47n\") pod \"speaker-697j4\" (UID: \"f265a6e2-ea90-45ea-89c0-178d25243784\") " pod="metallb-system/speaker-697j4" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.309432 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6d7k\" (UniqueName: \"kubernetes.io/projected/c4a109b4-26ee-4a46-9333-989cf87c0ff7-kube-api-access-b6d7k\") pod \"controller-6968d8fdc4-dmwlt\" (UID: \"c4a109b4-26ee-4a46-9333-989cf87c0ff7\") " pod="metallb-system/controller-6968d8fdc4-dmwlt" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.689494 4881 generic.go:334] "Generic (PLEG): container finished" podID="998c47dc-b621-4357-86b9-f6d08cac4799" containerID="00ecec7a68182aee750726e487cfdfc0600f11f9060a5afa0e042e40441982a2" exitCode=0 Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.689565 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvptz" event={"ID":"998c47dc-b621-4357-86b9-f6d08cac4799","Type":"ContainerDied","Data":"00ecec7a68182aee750726e487cfdfc0600f11f9060a5afa0e042e40441982a2"} Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.692340 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eaaea696-21d8-4963-8364-82fa7bbb0e19-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-tzxpk\" (UID: \"eaaea696-21d8-4963-8364-82fa7bbb0e19\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.692471 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d055f37b-fab0-4fd0-b683-4a7974b21ad5-metrics-certs\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " 
pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.697809 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eaaea696-21d8-4963-8364-82fa7bbb0e19-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-tzxpk\" (UID: \"eaaea696-21d8-4963-8364-82fa7bbb0e19\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.697870 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d055f37b-fab0-4fd0-b683-4a7974b21ad5-metrics-certs\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.793652 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c4a109b4-26ee-4a46-9333-989cf87c0ff7-metrics-certs\") pod \"controller-6968d8fdc4-dmwlt\" (UID: \"c4a109b4-26ee-4a46-9333-989cf87c0ff7\") " pod="metallb-system/controller-6968d8fdc4-dmwlt" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.793727 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f265a6e2-ea90-45ea-89c0-178d25243784-memberlist\") pod \"speaker-697j4\" (UID: \"f265a6e2-ea90-45ea-89c0-178d25243784\") " pod="metallb-system/speaker-697j4" Jan 21 11:13:05 crc kubenswrapper[4881]: E0121 11:13:05.794113 4881 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 11:13:05 crc kubenswrapper[4881]: E0121 11:13:05.794292 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f265a6e2-ea90-45ea-89c0-178d25243784-memberlist podName:f265a6e2-ea90-45ea-89c0-178d25243784 nodeName:}" failed. No retries permitted until 2026-01-21 11:13:06.794257503 +0000 UTC m=+974.054213972 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f265a6e2-ea90-45ea-89c0-178d25243784-memberlist") pod "speaker-697j4" (UID: "f265a6e2-ea90-45ea-89c0-178d25243784") : secret "metallb-memberlist" not found Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.797461 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c4a109b4-26ee-4a46-9333-989cf87c0ff7-metrics-certs\") pod \"controller-6968d8fdc4-dmwlt\" (UID: \"c4a109b4-26ee-4a46-9333-989cf87c0ff7\") " pod="metallb-system/controller-6968d8fdc4-dmwlt" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.811611 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.830380 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.957898 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-dmwlt" Jan 21 11:13:06 crc kubenswrapper[4881]: I0121 11:13:06.160506 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk"] Jan 21 11:13:06 crc kubenswrapper[4881]: I0121 11:13:06.416418 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-dmwlt"] Jan 21 11:13:06 crc kubenswrapper[4881]: W0121 11:13:06.430428 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4a109b4_26ee_4a46_9333_989cf87c0ff7.slice/crio-24f6f21f9b2e4dd8131b07c5470a9e16b9dfebe17a0d82d12012117bced5092e WatchSource:0}: Error finding container 24f6f21f9b2e4dd8131b07c5470a9e16b9dfebe17a0d82d12012117bced5092e: Status 404 returned error can't find the container with id 24f6f21f9b2e4dd8131b07c5470a9e16b9dfebe17a0d82d12012117bced5092e Jan 21 11:13:06 crc kubenswrapper[4881]: I0121 11:13:06.697035 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-dmwlt" event={"ID":"c4a109b4-26ee-4a46-9333-989cf87c0ff7","Type":"ContainerStarted","Data":"a91bd133ef7136c69b92dec15f0d672ed0deb342d0d1dae3dfb907b1b16ba47b"} Jan 21 11:13:06 crc kubenswrapper[4881]: I0121 11:13:06.697406 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-dmwlt" event={"ID":"c4a109b4-26ee-4a46-9333-989cf87c0ff7","Type":"ContainerStarted","Data":"24f6f21f9b2e4dd8131b07c5470a9e16b9dfebe17a0d82d12012117bced5092e"} Jan 21 11:13:06 crc kubenswrapper[4881]: I0121 11:13:06.698057 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lm54h" event={"ID":"d055f37b-fab0-4fd0-b683-4a7974b21ad5","Type":"ContainerStarted","Data":"ba3b897ddc85e913095024b0a90e493360ed4e2ec3bcac8b299171b6eee171f1"} Jan 21 11:13:06 crc kubenswrapper[4881]: I0121 11:13:06.698841 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk" event={"ID":"eaaea696-21d8-4963-8364-82fa7bbb0e19","Type":"ContainerStarted","Data":"fc0338162f9b9cd0a25a9ae9f7c0651b7e1179bdd0e328740478ee12dbddf32f"} Jan 21 11:13:06 crc kubenswrapper[4881]: I0121 11:13:06.810958 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f265a6e2-ea90-45ea-89c0-178d25243784-memberlist\") pod \"speaker-697j4\" (UID: \"f265a6e2-ea90-45ea-89c0-178d25243784\") " pod="metallb-system/speaker-697j4" Jan 21 11:13:06 crc kubenswrapper[4881]: I0121 11:13:06.817049 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f265a6e2-ea90-45ea-89c0-178d25243784-memberlist\") pod \"speaker-697j4\" (UID: \"f265a6e2-ea90-45ea-89c0-178d25243784\") " pod="metallb-system/speaker-697j4" Jan 21 11:13:06 crc kubenswrapper[4881]: I0121 11:13:06.850763 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-697j4" Jan 21 11:13:06 crc kubenswrapper[4881]: W0121 11:13:06.883025 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf265a6e2_ea90_45ea_89c0_178d25243784.slice/crio-1b6464c369f82f6432ae53745f95d29b0241cc9ac91966100f6f1b57a49ed3db WatchSource:0}: Error finding container 1b6464c369f82f6432ae53745f95d29b0241cc9ac91966100f6f1b57a49ed3db: Status 404 returned error can't find the container with id 1b6464c369f82f6432ae53745f95d29b0241cc9ac91966100f6f1b57a49ed3db Jan 21 11:13:07 crc kubenswrapper[4881]: I0121 11:13:07.755576 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-697j4" event={"ID":"f265a6e2-ea90-45ea-89c0-178d25243784","Type":"ContainerStarted","Data":"cf6e40113ac1676c1cf69f9415032710d03dc03be9ba5f02d85ea035ca382bd5"} Jan 21 11:13:07 crc kubenswrapper[4881]: I0121 11:13:07.755848 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-697j4" event={"ID":"f265a6e2-ea90-45ea-89c0-178d25243784","Type":"ContainerStarted","Data":"1b6464c369f82f6432ae53745f95d29b0241cc9ac91966100f6f1b57a49ed3db"} Jan 21 11:13:07 crc kubenswrapper[4881]: I0121 11:13:07.760487 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvptz" event={"ID":"998c47dc-b621-4357-86b9-f6d08cac4799","Type":"ContainerStarted","Data":"c2f36538556042a4c3ef112ac5ba0181ebb2721edcd599559000130ae467ead0"} Jan 21 11:13:07 crc kubenswrapper[4881]: I0121 11:13:07.775898 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-dmwlt" event={"ID":"c4a109b4-26ee-4a46-9333-989cf87c0ff7","Type":"ContainerStarted","Data":"93878269955d9d98c70f249b3d5011b15157e9e8047207419b5ef1c476a12239"} Jan 21 11:13:07 crc kubenswrapper[4881]: I0121 11:13:07.776243 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-dmwlt" Jan 21 11:13:07 crc kubenswrapper[4881]: I0121 11:13:07.832472 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-dmwlt" podStartSLOduration=3.832452704 podStartE2EDuration="3.832452704s" podCreationTimestamp="2026-01-21 11:13:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:13:07.826962769 +0000 UTC m=+975.086919258" watchObservedRunningTime="2026-01-21 11:13:07.832452704 +0000 UTC m=+975.092409163" Jan 21 11:13:08 crc kubenswrapper[4881]: I0121 11:13:08.996318 4881 generic.go:334] "Generic (PLEG): container finished" podID="998c47dc-b621-4357-86b9-f6d08cac4799" containerID="c2f36538556042a4c3ef112ac5ba0181ebb2721edcd599559000130ae467ead0" exitCode=0 Jan 21 11:13:08 crc kubenswrapper[4881]: I0121 11:13:08.996409 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvptz" event={"ID":"998c47dc-b621-4357-86b9-f6d08cac4799","Type":"ContainerDied","Data":"c2f36538556042a4c3ef112ac5ba0181ebb2721edcd599559000130ae467ead0"} Jan 21 11:13:08 crc kubenswrapper[4881]: I0121 11:13:08.998988 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-697j4" event={"ID":"f265a6e2-ea90-45ea-89c0-178d25243784","Type":"ContainerStarted","Data":"118a20b6920d9027d3f333741d5e78a878cf93b17bbd2a13df0fb533425784f2"} Jan 21 11:13:09 crc kubenswrapper[4881]: I0121 11:13:09.039035 4881 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-697j4" podStartSLOduration=5.039017192 podStartE2EDuration="5.039017192s" podCreationTimestamp="2026-01-21 11:13:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:13:09.035007164 +0000 UTC m=+976.294963623" watchObservedRunningTime="2026-01-21 11:13:09.039017192 +0000 UTC m=+976.298973661" Jan 21 11:13:10 crc kubenswrapper[4881]: I0121 11:13:10.356841 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-697j4" Jan 21 11:13:11 crc kubenswrapper[4881]: I0121 11:13:11.400092 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvptz" event={"ID":"998c47dc-b621-4357-86b9-f6d08cac4799","Type":"ContainerStarted","Data":"d0bb3056956d79836bd57985c9844270d4cb4c95a3ec04cb84f31deaf080579b"} Jan 21 11:13:11 crc kubenswrapper[4881]: I0121 11:13:11.443174 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rvptz" podStartSLOduration=5.818267851 podStartE2EDuration="8.44314499s" podCreationTimestamp="2026-01-21 11:13:03 +0000 UTC" firstStartedPulling="2026-01-21 11:13:05.693731189 +0000 UTC m=+972.953687658" lastFinishedPulling="2026-01-21 11:13:08.318608338 +0000 UTC m=+975.578564797" observedRunningTime="2026-01-21 11:13:11.42720851 +0000 UTC m=+978.687164979" watchObservedRunningTime="2026-01-21 11:13:11.44314499 +0000 UTC m=+978.703101459" Jan 21 11:13:13 crc kubenswrapper[4881]: I0121 11:13:13.701678 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rvptz" Jan 21 11:13:13 crc kubenswrapper[4881]: I0121 11:13:13.701742 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rvptz" Jan 21 11:13:13 crc kubenswrapper[4881]: I0121 11:13:13.758652 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rvptz" Jan 21 11:13:15 crc kubenswrapper[4881]: I0121 11:13:15.158700 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rvptz" Jan 21 11:13:15 crc kubenswrapper[4881]: I0121 11:13:15.224766 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvptz"] Jan 21 11:13:17 crc kubenswrapper[4881]: I0121 11:13:17.061136 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rvptz" podUID="998c47dc-b621-4357-86b9-f6d08cac4799" containerName="registry-server" containerID="cri-o://d0bb3056956d79836bd57985c9844270d4cb4c95a3ec04cb84f31deaf080579b" gracePeriod=2 Jan 21 11:13:18 crc kubenswrapper[4881]: I0121 11:13:18.075514 4881 generic.go:334] "Generic (PLEG): container finished" podID="998c47dc-b621-4357-86b9-f6d08cac4799" containerID="d0bb3056956d79836bd57985c9844270d4cb4c95a3ec04cb84f31deaf080579b" exitCode=0 Jan 21 11:13:18 crc kubenswrapper[4881]: I0121 11:13:18.075562 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvptz" event={"ID":"998c47dc-b621-4357-86b9-f6d08cac4799","Type":"ContainerDied","Data":"d0bb3056956d79836bd57985c9844270d4cb4c95a3ec04cb84f31deaf080579b"} Jan 21 11:13:20 crc kubenswrapper[4881]: I0121 11:13:20.427148 4881 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvptz" Jan 21 11:13:20 crc kubenswrapper[4881]: I0121 11:13:20.545616 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/998c47dc-b621-4357-86b9-f6d08cac4799-catalog-content\") pod \"998c47dc-b621-4357-86b9-f6d08cac4799\" (UID: \"998c47dc-b621-4357-86b9-f6d08cac4799\") " Jan 21 11:13:20 crc kubenswrapper[4881]: I0121 11:13:20.545677 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tqdx\" (UniqueName: \"kubernetes.io/projected/998c47dc-b621-4357-86b9-f6d08cac4799-kube-api-access-7tqdx\") pod \"998c47dc-b621-4357-86b9-f6d08cac4799\" (UID: \"998c47dc-b621-4357-86b9-f6d08cac4799\") " Jan 21 11:13:20 crc kubenswrapper[4881]: I0121 11:13:20.545813 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/998c47dc-b621-4357-86b9-f6d08cac4799-utilities\") pod \"998c47dc-b621-4357-86b9-f6d08cac4799\" (UID: \"998c47dc-b621-4357-86b9-f6d08cac4799\") " Jan 21 11:13:20 crc kubenswrapper[4881]: I0121 11:13:20.547132 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/998c47dc-b621-4357-86b9-f6d08cac4799-utilities" (OuterVolumeSpecName: "utilities") pod "998c47dc-b621-4357-86b9-f6d08cac4799" (UID: "998c47dc-b621-4357-86b9-f6d08cac4799"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:13:20 crc kubenswrapper[4881]: I0121 11:13:20.558680 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/998c47dc-b621-4357-86b9-f6d08cac4799-kube-api-access-7tqdx" (OuterVolumeSpecName: "kube-api-access-7tqdx") pod "998c47dc-b621-4357-86b9-f6d08cac4799" (UID: "998c47dc-b621-4357-86b9-f6d08cac4799"). InnerVolumeSpecName "kube-api-access-7tqdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:13:20 crc kubenswrapper[4881]: I0121 11:13:20.574837 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/998c47dc-b621-4357-86b9-f6d08cac4799-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "998c47dc-b621-4357-86b9-f6d08cac4799" (UID: "998c47dc-b621-4357-86b9-f6d08cac4799"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:13:20 crc kubenswrapper[4881]: I0121 11:13:20.647551 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/998c47dc-b621-4357-86b9-f6d08cac4799-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:13:20 crc kubenswrapper[4881]: I0121 11:13:20.647622 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tqdx\" (UniqueName: \"kubernetes.io/projected/998c47dc-b621-4357-86b9-f6d08cac4799-kube-api-access-7tqdx\") on node \"crc\" DevicePath \"\"" Jan 21 11:13:20 crc kubenswrapper[4881]: I0121 11:13:20.647636 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/998c47dc-b621-4357-86b9-f6d08cac4799-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:13:21 crc kubenswrapper[4881]: I0121 11:13:21.100469 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk" event={"ID":"eaaea696-21d8-4963-8364-82fa7bbb0e19","Type":"ContainerStarted","Data":"d43e06f6fdfda916124c7f45ddca7862ea152d5ecb818596e3705da2a15518d1"} Jan 21 11:13:21 crc kubenswrapper[4881]: I0121 11:13:21.100879 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk" Jan 21 11:13:21 crc kubenswrapper[4881]: I0121 11:13:21.102488 4881 generic.go:334] "Generic (PLEG): container finished" podID="d055f37b-fab0-4fd0-b683-4a7974b21ad5" containerID="3746b5b9f53d7fdfe487182eb76a95aae4a70045e175b2a0be1c96278628b944" exitCode=0 Jan 21 11:13:21 crc kubenswrapper[4881]: I0121 11:13:21.102691 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lm54h" event={"ID":"d055f37b-fab0-4fd0-b683-4a7974b21ad5","Type":"ContainerDied","Data":"3746b5b9f53d7fdfe487182eb76a95aae4a70045e175b2a0be1c96278628b944"} Jan 21 11:13:21 crc kubenswrapper[4881]: I0121 11:13:21.105269 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvptz" event={"ID":"998c47dc-b621-4357-86b9-f6d08cac4799","Type":"ContainerDied","Data":"185d2460e59c873ad3336643088425b69bafc3d60d4435c226adb50269ff2c1b"} Jan 21 11:13:21 crc kubenswrapper[4881]: I0121 11:13:21.105371 4881 scope.go:117] "RemoveContainer" containerID="d0bb3056956d79836bd57985c9844270d4cb4c95a3ec04cb84f31deaf080579b" Jan 21 11:13:21 crc kubenswrapper[4881]: I0121 11:13:21.105343 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvptz" Jan 21 11:13:21 crc kubenswrapper[4881]: I0121 11:13:21.132873 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk" podStartSLOduration=3.011279018 podStartE2EDuration="17.132838009s" podCreationTimestamp="2026-01-21 11:13:04 +0000 UTC" firstStartedPulling="2026-01-21 11:13:06.172747348 +0000 UTC m=+973.432703817" lastFinishedPulling="2026-01-21 11:13:20.294306339 +0000 UTC m=+987.554262808" observedRunningTime="2026-01-21 11:13:21.126589385 +0000 UTC m=+988.386545854" watchObservedRunningTime="2026-01-21 11:13:21.132838009 +0000 UTC m=+988.392794488" Jan 21 11:13:21 crc kubenswrapper[4881]: I0121 11:13:21.149548 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvptz"] Jan 21 11:13:21 crc kubenswrapper[4881]: I0121 11:13:21.155102 4881 scope.go:117] "RemoveContainer" containerID="c2f36538556042a4c3ef112ac5ba0181ebb2721edcd599559000130ae467ead0" Jan 21 11:13:21 crc kubenswrapper[4881]: I0121 11:13:21.156363 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvptz"] Jan 21 11:13:21 crc kubenswrapper[4881]: I0121 11:13:21.219751 4881 scope.go:117] "RemoveContainer" containerID="00ecec7a68182aee750726e487cfdfc0600f11f9060a5afa0e042e40441982a2" Jan 21 11:13:21 crc kubenswrapper[4881]: I0121 11:13:21.414307 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="998c47dc-b621-4357-86b9-f6d08cac4799" path="/var/lib/kubelet/pods/998c47dc-b621-4357-86b9-f6d08cac4799/volumes" Jan 21 11:13:22 crc kubenswrapper[4881]: I0121 11:13:22.113172 4881 generic.go:334] "Generic (PLEG): container finished" podID="d055f37b-fab0-4fd0-b683-4a7974b21ad5" containerID="cc533ffdf1fe3cc98221465f5f7fa5ec0769b8130e1ee2c7bcec6655e3618f56" exitCode=0 Jan 21 11:13:22 crc kubenswrapper[4881]: I0121 11:13:22.113260 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lm54h" event={"ID":"d055f37b-fab0-4fd0-b683-4a7974b21ad5","Type":"ContainerDied","Data":"cc533ffdf1fe3cc98221465f5f7fa5ec0769b8130e1ee2c7bcec6655e3618f56"} Jan 21 11:13:23 crc kubenswrapper[4881]: I0121 11:13:23.126638 4881 generic.go:334] "Generic (PLEG): container finished" podID="d055f37b-fab0-4fd0-b683-4a7974b21ad5" containerID="a68669dfd67af511bc056281db7a5556d9a70faa9d9b9116e660ec6356a708d9" exitCode=0 Jan 21 11:13:23 crc kubenswrapper[4881]: I0121 11:13:23.126751 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lm54h" event={"ID":"d055f37b-fab0-4fd0-b683-4a7974b21ad5","Type":"ContainerDied","Data":"a68669dfd67af511bc056281db7a5556d9a70faa9d9b9116e660ec6356a708d9"} Jan 21 11:13:24 crc kubenswrapper[4881]: I0121 11:13:24.140948 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lm54h" event={"ID":"d055f37b-fab0-4fd0-b683-4a7974b21ad5","Type":"ContainerStarted","Data":"677e4b3919eac7c3150478c52ae85bbe28623e8af9b17d6d1436d08620cb3123"} Jan 21 11:13:24 crc kubenswrapper[4881]: I0121 11:13:24.141244 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lm54h" event={"ID":"d055f37b-fab0-4fd0-b683-4a7974b21ad5","Type":"ContainerStarted","Data":"7879ae745d39cd51daf63d47f3f53004e405e3baca350d1c1c59a026d40cde2a"} Jan 21 11:13:24 crc kubenswrapper[4881]: I0121 11:13:24.141254 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lm54h" 
event={"ID":"d055f37b-fab0-4fd0-b683-4a7974b21ad5","Type":"ContainerStarted","Data":"1c339abc1a01b23b06dd105a1305c5d3b86b4f64ea15b284aca2debb9a62ffe4"} Jan 21 11:13:24 crc kubenswrapper[4881]: I0121 11:13:24.141263 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lm54h" event={"ID":"d055f37b-fab0-4fd0-b683-4a7974b21ad5","Type":"ContainerStarted","Data":"230354f80d8522c72349de08951f7edb532da33e2c1091edcaf49a586219b704"} Jan 21 11:13:24 crc kubenswrapper[4881]: I0121 11:13:24.141271 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lm54h" event={"ID":"d055f37b-fab0-4fd0-b683-4a7974b21ad5","Type":"ContainerStarted","Data":"27a254648b2c6070da76d6cb8b28bdbbae1cab2c6167b35b9c1f026d61a91c19"} Jan 21 11:13:25 crc kubenswrapper[4881]: I0121 11:13:25.153867 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lm54h" event={"ID":"d055f37b-fab0-4fd0-b683-4a7974b21ad5","Type":"ContainerStarted","Data":"b41c533276ceeb71e3f4e8063c94eb323347149a9bda0bd23a2f44435925439a"} Jan 21 11:13:25 crc kubenswrapper[4881]: I0121 11:13:25.154158 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:25 crc kubenswrapper[4881]: I0121 11:13:25.812179 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:25 crc kubenswrapper[4881]: I0121 11:13:25.872874 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:25 crc kubenswrapper[4881]: I0121 11:13:25.897452 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-lm54h" podStartSLOduration=8.087414374 podStartE2EDuration="21.897419289s" podCreationTimestamp="2026-01-21 11:13:04 +0000 UTC" firstStartedPulling="2026-01-21 11:13:06.513024257 +0000 UTC m=+973.772980726" lastFinishedPulling="2026-01-21 11:13:20.323029172 +0000 UTC m=+987.582985641" observedRunningTime="2026-01-21 11:13:25.207602632 +0000 UTC m=+992.467559101" watchObservedRunningTime="2026-01-21 11:13:25.897419289 +0000 UTC m=+993.157375758" Jan 21 11:13:25 crc kubenswrapper[4881]: I0121 11:13:25.966602 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-dmwlt" Jan 21 11:13:26 crc kubenswrapper[4881]: I0121 11:13:26.864372 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-697j4" Jan 21 11:13:29 crc kubenswrapper[4881]: I0121 11:13:29.851432 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:13:29 crc kubenswrapper[4881]: I0121 11:13:29.851759 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:13:30 crc kubenswrapper[4881]: I0121 11:13:30.280745 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-67hkt"] Jan 21 11:13:30 crc kubenswrapper[4881]: E0121 11:13:30.281070 4881 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="998c47dc-b621-4357-86b9-f6d08cac4799" containerName="extract-content" Jan 21 11:13:30 crc kubenswrapper[4881]: I0121 11:13:30.281084 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="998c47dc-b621-4357-86b9-f6d08cac4799" containerName="extract-content" Jan 21 11:13:30 crc kubenswrapper[4881]: E0121 11:13:30.281100 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="998c47dc-b621-4357-86b9-f6d08cac4799" containerName="extract-utilities" Jan 21 11:13:30 crc kubenswrapper[4881]: I0121 11:13:30.281107 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="998c47dc-b621-4357-86b9-f6d08cac4799" containerName="extract-utilities" Jan 21 11:13:30 crc kubenswrapper[4881]: E0121 11:13:30.281133 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="998c47dc-b621-4357-86b9-f6d08cac4799" containerName="registry-server" Jan 21 11:13:30 crc kubenswrapper[4881]: I0121 11:13:30.281140 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="998c47dc-b621-4357-86b9-f6d08cac4799" containerName="registry-server" Jan 21 11:13:30 crc kubenswrapper[4881]: I0121 11:13:30.281261 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="998c47dc-b621-4357-86b9-f6d08cac4799" containerName="registry-server" Jan 21 11:13:30 crc kubenswrapper[4881]: I0121 11:13:30.281873 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-67hkt" Jan 21 11:13:30 crc kubenswrapper[4881]: I0121 11:13:30.285465 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 21 11:13:30 crc kubenswrapper[4881]: I0121 11:13:30.285568 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 21 11:13:30 crc kubenswrapper[4881]: I0121 11:13:30.287765 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-tq8v2" Jan 21 11:13:30 crc kubenswrapper[4881]: I0121 11:13:30.297277 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-67hkt"] Jan 21 11:13:30 crc kubenswrapper[4881]: I0121 11:13:30.445158 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lnht\" (UniqueName: \"kubernetes.io/projected/7e121e55-2150-44d1-befa-4b94a3103b31-kube-api-access-2lnht\") pod \"openstack-operator-index-67hkt\" (UID: \"7e121e55-2150-44d1-befa-4b94a3103b31\") " pod="openstack-operators/openstack-operator-index-67hkt" Jan 21 11:13:30 crc kubenswrapper[4881]: I0121 11:13:30.547298 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lnht\" (UniqueName: \"kubernetes.io/projected/7e121e55-2150-44d1-befa-4b94a3103b31-kube-api-access-2lnht\") pod \"openstack-operator-index-67hkt\" (UID: \"7e121e55-2150-44d1-befa-4b94a3103b31\") " pod="openstack-operators/openstack-operator-index-67hkt" Jan 21 11:13:30 crc kubenswrapper[4881]: I0121 11:13:30.575849 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lnht\" (UniqueName: \"kubernetes.io/projected/7e121e55-2150-44d1-befa-4b94a3103b31-kube-api-access-2lnht\") pod \"openstack-operator-index-67hkt\" (UID: \"7e121e55-2150-44d1-befa-4b94a3103b31\") " pod="openstack-operators/openstack-operator-index-67hkt" Jan 21 11:13:30 crc 
kubenswrapper[4881]: I0121 11:13:30.608321 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-67hkt" Jan 21 11:13:31 crc kubenswrapper[4881]: I0121 11:13:31.133359 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-67hkt"] Jan 21 11:13:31 crc kubenswrapper[4881]: I0121 11:13:31.196448 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-67hkt" event={"ID":"7e121e55-2150-44d1-befa-4b94a3103b31","Type":"ContainerStarted","Data":"0521691acf8b75de45ecf22882ef2ca1bdfabc44c0c161991c4d6c423318f707"} Jan 21 11:13:33 crc kubenswrapper[4881]: I0121 11:13:33.661553 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-67hkt"] Jan 21 11:13:34 crc kubenswrapper[4881]: I0121 11:13:34.268289 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-7vz4j"] Jan 21 11:13:34 crc kubenswrapper[4881]: I0121 11:13:34.270264 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-7vz4j" Jan 21 11:13:34 crc kubenswrapper[4881]: I0121 11:13:34.276065 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7vz4j"] Jan 21 11:13:34 crc kubenswrapper[4881]: I0121 11:13:34.321675 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgsrv\" (UniqueName: \"kubernetes.io/projected/0a051fc2-b6e4-463c-bb0a-b565d12b21b4-kube-api-access-pgsrv\") pod \"openstack-operator-index-7vz4j\" (UID: \"0a051fc2-b6e4-463c-bb0a-b565d12b21b4\") " pod="openstack-operators/openstack-operator-index-7vz4j" Jan 21 11:13:34 crc kubenswrapper[4881]: I0121 11:13:34.422694 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgsrv\" (UniqueName: \"kubernetes.io/projected/0a051fc2-b6e4-463c-bb0a-b565d12b21b4-kube-api-access-pgsrv\") pod \"openstack-operator-index-7vz4j\" (UID: \"0a051fc2-b6e4-463c-bb0a-b565d12b21b4\") " pod="openstack-operators/openstack-operator-index-7vz4j" Jan 21 11:13:34 crc kubenswrapper[4881]: I0121 11:13:34.445701 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgsrv\" (UniqueName: \"kubernetes.io/projected/0a051fc2-b6e4-463c-bb0a-b565d12b21b4-kube-api-access-pgsrv\") pod \"openstack-operator-index-7vz4j\" (UID: \"0a051fc2-b6e4-463c-bb0a-b565d12b21b4\") " pod="openstack-operators/openstack-operator-index-7vz4j" Jan 21 11:13:34 crc kubenswrapper[4881]: I0121 11:13:34.597020 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-7vz4j" Jan 21 11:13:35 crc kubenswrapper[4881]: I0121 11:13:35.229160 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-67hkt" event={"ID":"7e121e55-2150-44d1-befa-4b94a3103b31","Type":"ContainerStarted","Data":"eff4e9e5eb99949ed0d7b8357150a3132009be33bd7064176c801124401c2a5c"} Jan 21 11:13:35 crc kubenswrapper[4881]: I0121 11:13:35.229334 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-67hkt" podUID="7e121e55-2150-44d1-befa-4b94a3103b31" containerName="registry-server" containerID="cri-o://eff4e9e5eb99949ed0d7b8357150a3132009be33bd7064176c801124401c2a5c" gracePeriod=2 Jan 21 11:13:35 crc kubenswrapper[4881]: I0121 11:13:35.272127 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-67hkt" podStartSLOduration=1.475895634 podStartE2EDuration="5.272102327s" podCreationTimestamp="2026-01-21 11:13:30 +0000 UTC" firstStartedPulling="2026-01-21 11:13:31.147204295 +0000 UTC m=+998.407160764" lastFinishedPulling="2026-01-21 11:13:34.943410988 +0000 UTC m=+1002.203367457" observedRunningTime="2026-01-21 11:13:35.270513848 +0000 UTC m=+1002.530470317" watchObservedRunningTime="2026-01-21 11:13:35.272102327 +0000 UTC m=+1002.532058796" Jan 21 11:13:35 crc kubenswrapper[4881]: I0121 11:13:35.328978 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7vz4j"] Jan 21 11:13:35 crc kubenswrapper[4881]: I0121 11:13:35.616614 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-67hkt_7e121e55-2150-44d1-befa-4b94a3103b31/registry-server/0.log" Jan 21 11:13:35 crc kubenswrapper[4881]: I0121 11:13:35.617037 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-67hkt" Jan 21 11:13:35 crc kubenswrapper[4881]: I0121 11:13:35.644015 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lnht\" (UniqueName: \"kubernetes.io/projected/7e121e55-2150-44d1-befa-4b94a3103b31-kube-api-access-2lnht\") pod \"7e121e55-2150-44d1-befa-4b94a3103b31\" (UID: \"7e121e55-2150-44d1-befa-4b94a3103b31\") " Jan 21 11:13:35 crc kubenswrapper[4881]: I0121 11:13:35.656054 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e121e55-2150-44d1-befa-4b94a3103b31-kube-api-access-2lnht" (OuterVolumeSpecName: "kube-api-access-2lnht") pod "7e121e55-2150-44d1-befa-4b94a3103b31" (UID: "7e121e55-2150-44d1-befa-4b94a3103b31"). InnerVolumeSpecName "kube-api-access-2lnht". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:13:35 crc kubenswrapper[4881]: I0121 11:13:35.745996 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lnht\" (UniqueName: \"kubernetes.io/projected/7e121e55-2150-44d1-befa-4b94a3103b31-kube-api-access-2lnht\") on node \"crc\" DevicePath \"\"" Jan 21 11:13:35 crc kubenswrapper[4881]: I0121 11:13:35.815388 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:35 crc kubenswrapper[4881]: I0121 11:13:35.840121 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk" Jan 21 11:13:36 crc kubenswrapper[4881]: I0121 11:13:36.238775 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-67hkt_7e121e55-2150-44d1-befa-4b94a3103b31/registry-server/0.log" Jan 21 11:13:36 crc kubenswrapper[4881]: I0121 11:13:36.238862 4881 generic.go:334] "Generic (PLEG): container finished" podID="7e121e55-2150-44d1-befa-4b94a3103b31" containerID="eff4e9e5eb99949ed0d7b8357150a3132009be33bd7064176c801124401c2a5c" exitCode=2 Jan 21 11:13:36 crc kubenswrapper[4881]: I0121 11:13:36.238933 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-67hkt" event={"ID":"7e121e55-2150-44d1-befa-4b94a3103b31","Type":"ContainerDied","Data":"eff4e9e5eb99949ed0d7b8357150a3132009be33bd7064176c801124401c2a5c"} Jan 21 11:13:36 crc kubenswrapper[4881]: I0121 11:13:36.238955 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-67hkt" Jan 21 11:13:36 crc kubenswrapper[4881]: I0121 11:13:36.238967 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-67hkt" event={"ID":"7e121e55-2150-44d1-befa-4b94a3103b31","Type":"ContainerDied","Data":"0521691acf8b75de45ecf22882ef2ca1bdfabc44c0c161991c4d6c423318f707"} Jan 21 11:13:36 crc kubenswrapper[4881]: I0121 11:13:36.238992 4881 scope.go:117] "RemoveContainer" containerID="eff4e9e5eb99949ed0d7b8357150a3132009be33bd7064176c801124401c2a5c" Jan 21 11:13:36 crc kubenswrapper[4881]: I0121 11:13:36.241390 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7vz4j" event={"ID":"0a051fc2-b6e4-463c-bb0a-b565d12b21b4","Type":"ContainerStarted","Data":"1b649bce78bf889841cb871a4ee4082eda5d5cc10688bb8f702507dc432c51ae"} Jan 21 11:13:36 crc kubenswrapper[4881]: I0121 11:13:36.241421 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7vz4j" event={"ID":"0a051fc2-b6e4-463c-bb0a-b565d12b21b4","Type":"ContainerStarted","Data":"fc784969ca98acbbed6abcceecefb978ca22b1208b7ed890aa07ebbb725298a5"} Jan 21 11:13:36 crc kubenswrapper[4881]: I0121 11:13:36.263108 4881 scope.go:117] "RemoveContainer" containerID="eff4e9e5eb99949ed0d7b8357150a3132009be33bd7064176c801124401c2a5c" Jan 21 11:13:36 crc kubenswrapper[4881]: E0121 11:13:36.266538 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eff4e9e5eb99949ed0d7b8357150a3132009be33bd7064176c801124401c2a5c\": container with ID starting with eff4e9e5eb99949ed0d7b8357150a3132009be33bd7064176c801124401c2a5c not found: ID does not exist" containerID="eff4e9e5eb99949ed0d7b8357150a3132009be33bd7064176c801124401c2a5c" Jan 21 11:13:36 crc 
kubenswrapper[4881]: I0121 11:13:36.266605 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eff4e9e5eb99949ed0d7b8357150a3132009be33bd7064176c801124401c2a5c"} err="failed to get container status \"eff4e9e5eb99949ed0d7b8357150a3132009be33bd7064176c801124401c2a5c\": rpc error: code = NotFound desc = could not find container \"eff4e9e5eb99949ed0d7b8357150a3132009be33bd7064176c801124401c2a5c\": container with ID starting with eff4e9e5eb99949ed0d7b8357150a3132009be33bd7064176c801124401c2a5c not found: ID does not exist" Jan 21 11:13:36 crc kubenswrapper[4881]: I0121 11:13:36.269101 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-7vz4j" podStartSLOduration=2.212353746 podStartE2EDuration="2.269068697s" podCreationTimestamp="2026-01-21 11:13:34 +0000 UTC" firstStartedPulling="2026-01-21 11:13:35.353981685 +0000 UTC m=+1002.613938154" lastFinishedPulling="2026-01-21 11:13:35.410696636 +0000 UTC m=+1002.670653105" observedRunningTime="2026-01-21 11:13:36.265240943 +0000 UTC m=+1003.525197432" watchObservedRunningTime="2026-01-21 11:13:36.269068697 +0000 UTC m=+1003.529025166" Jan 21 11:13:36 crc kubenswrapper[4881]: I0121 11:13:36.286007 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-67hkt"] Jan 21 11:13:36 crc kubenswrapper[4881]: I0121 11:13:36.292673 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-67hkt"] Jan 21 11:13:37 crc kubenswrapper[4881]: I0121 11:13:37.326182 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e121e55-2150-44d1-befa-4b94a3103b31" path="/var/lib/kubelet/pods/7e121e55-2150-44d1-befa-4b94a3103b31/volumes" Jan 21 11:13:44 crc kubenswrapper[4881]: I0121 11:13:44.598239 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-7vz4j" Jan 21 11:13:44 crc kubenswrapper[4881]: I0121 11:13:44.598920 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-7vz4j" Jan 21 11:13:44 crc kubenswrapper[4881]: I0121 11:13:44.628240 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-7vz4j" Jan 21 11:13:45 crc kubenswrapper[4881]: I0121 11:13:45.340447 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-7vz4j" Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.502812 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l"] Jan 21 11:13:46 crc kubenswrapper[4881]: E0121 11:13:46.503320 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e121e55-2150-44d1-befa-4b94a3103b31" containerName="registry-server" Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.503343 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e121e55-2150-44d1-befa-4b94a3103b31" containerName="registry-server" Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.503539 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e121e55-2150-44d1-befa-4b94a3103b31" containerName="registry-server" Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.505051 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.508135 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-9qzn5" Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.511524 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l"] Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.515029 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxcp4\" (UniqueName: \"kubernetes.io/projected/1c737afe-a2ad-4075-acd6-9f73aada0e4b-kube-api-access-lxcp4\") pod \"23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l\" (UID: \"1c737afe-a2ad-4075-acd6-9f73aada0e4b\") " pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.515130 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c737afe-a2ad-4075-acd6-9f73aada0e4b-bundle\") pod \"23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l\" (UID: \"1c737afe-a2ad-4075-acd6-9f73aada0e4b\") " pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.515248 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c737afe-a2ad-4075-acd6-9f73aada0e4b-util\") pod \"23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l\" (UID: \"1c737afe-a2ad-4075-acd6-9f73aada0e4b\") " pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.616408 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxcp4\" (UniqueName: \"kubernetes.io/projected/1c737afe-a2ad-4075-acd6-9f73aada0e4b-kube-api-access-lxcp4\") pod \"23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l\" (UID: \"1c737afe-a2ad-4075-acd6-9f73aada0e4b\") " pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.616520 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c737afe-a2ad-4075-acd6-9f73aada0e4b-bundle\") pod \"23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l\" (UID: \"1c737afe-a2ad-4075-acd6-9f73aada0e4b\") " pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.616597 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c737afe-a2ad-4075-acd6-9f73aada0e4b-util\") pod \"23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l\" (UID: \"1c737afe-a2ad-4075-acd6-9f73aada0e4b\") " pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.617323 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/1c737afe-a2ad-4075-acd6-9f73aada0e4b-bundle\") pod \"23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l\" (UID: \"1c737afe-a2ad-4075-acd6-9f73aada0e4b\") " pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.617369 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c737afe-a2ad-4075-acd6-9f73aada0e4b-util\") pod \"23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l\" (UID: \"1c737afe-a2ad-4075-acd6-9f73aada0e4b\") " pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.641083 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxcp4\" (UniqueName: \"kubernetes.io/projected/1c737afe-a2ad-4075-acd6-9f73aada0e4b-kube-api-access-lxcp4\") pod \"23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l\" (UID: \"1c737afe-a2ad-4075-acd6-9f73aada0e4b\") " pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.829635 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" Jan 21 11:13:47 crc kubenswrapper[4881]: I0121 11:13:47.288724 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l"] Jan 21 11:13:47 crc kubenswrapper[4881]: W0121 11:13:47.297593 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c737afe_a2ad_4075_acd6_9f73aada0e4b.slice/crio-32950c149b73cfc98cb369b7708eaa4070423d894512b36f017ccaec2e114010 WatchSource:0}: Error finding container 32950c149b73cfc98cb369b7708eaa4070423d894512b36f017ccaec2e114010: Status 404 returned error can't find the container with id 32950c149b73cfc98cb369b7708eaa4070423d894512b36f017ccaec2e114010 Jan 21 11:13:47 crc kubenswrapper[4881]: I0121 11:13:47.345487 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" event={"ID":"1c737afe-a2ad-4075-acd6-9f73aada0e4b","Type":"ContainerStarted","Data":"32950c149b73cfc98cb369b7708eaa4070423d894512b36f017ccaec2e114010"} Jan 21 11:13:50 crc kubenswrapper[4881]: I0121 11:13:50.368942 4881 generic.go:334] "Generic (PLEG): container finished" podID="1c737afe-a2ad-4075-acd6-9f73aada0e4b" containerID="507e5bcd4990d6cae98f2c67f74453ce637d733ec2bab01139b31d40784c1782" exitCode=0 Jan 21 11:13:50 crc kubenswrapper[4881]: I0121 11:13:50.369262 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" event={"ID":"1c737afe-a2ad-4075-acd6-9f73aada0e4b","Type":"ContainerDied","Data":"507e5bcd4990d6cae98f2c67f74453ce637d733ec2bab01139b31d40784c1782"} Jan 21 11:13:51 crc kubenswrapper[4881]: I0121 11:13:51.383274 4881 generic.go:334] "Generic (PLEG): container finished" podID="1c737afe-a2ad-4075-acd6-9f73aada0e4b" containerID="8070ffff0d68dc11586cc4bdbf539020f6756380dd8f4480fc2534e1e0554f8a" exitCode=0 Jan 21 11:13:51 crc kubenswrapper[4881]: I0121 11:13:51.383945 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" event={"ID":"1c737afe-a2ad-4075-acd6-9f73aada0e4b","Type":"ContainerDied","Data":"8070ffff0d68dc11586cc4bdbf539020f6756380dd8f4480fc2534e1e0554f8a"} Jan 21 11:13:52 crc kubenswrapper[4881]: I0121 11:13:52.400363 4881 generic.go:334] "Generic (PLEG): container finished" podID="1c737afe-a2ad-4075-acd6-9f73aada0e4b" containerID="0af488c99970619180b117b8819887b079f89bce6ab51b9ed22ffb3bcb2ad111" exitCode=0 Jan 21 11:13:52 crc kubenswrapper[4881]: I0121 11:13:52.400418 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" event={"ID":"1c737afe-a2ad-4075-acd6-9f73aada0e4b","Type":"ContainerDied","Data":"0af488c99970619180b117b8819887b079f89bce6ab51b9ed22ffb3bcb2ad111"} Jan 21 11:13:53 crc kubenswrapper[4881]: I0121 11:13:53.690068 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" Jan 21 11:13:53 crc kubenswrapper[4881]: I0121 11:13:53.845167 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c737afe-a2ad-4075-acd6-9f73aada0e4b-bundle\") pod \"1c737afe-a2ad-4075-acd6-9f73aada0e4b\" (UID: \"1c737afe-a2ad-4075-acd6-9f73aada0e4b\") " Jan 21 11:13:53 crc kubenswrapper[4881]: I0121 11:13:53.845397 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxcp4\" (UniqueName: \"kubernetes.io/projected/1c737afe-a2ad-4075-acd6-9f73aada0e4b-kube-api-access-lxcp4\") pod \"1c737afe-a2ad-4075-acd6-9f73aada0e4b\" (UID: \"1c737afe-a2ad-4075-acd6-9f73aada0e4b\") " Jan 21 11:13:53 crc kubenswrapper[4881]: I0121 11:13:53.845501 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c737afe-a2ad-4075-acd6-9f73aada0e4b-util\") pod \"1c737afe-a2ad-4075-acd6-9f73aada0e4b\" (UID: \"1c737afe-a2ad-4075-acd6-9f73aada0e4b\") " Jan 21 11:13:53 crc kubenswrapper[4881]: I0121 11:13:53.846251 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c737afe-a2ad-4075-acd6-9f73aada0e4b-bundle" (OuterVolumeSpecName: "bundle") pod "1c737afe-a2ad-4075-acd6-9f73aada0e4b" (UID: "1c737afe-a2ad-4075-acd6-9f73aada0e4b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:13:53 crc kubenswrapper[4881]: I0121 11:13:53.851527 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c737afe-a2ad-4075-acd6-9f73aada0e4b-kube-api-access-lxcp4" (OuterVolumeSpecName: "kube-api-access-lxcp4") pod "1c737afe-a2ad-4075-acd6-9f73aada0e4b" (UID: "1c737afe-a2ad-4075-acd6-9f73aada0e4b"). InnerVolumeSpecName "kube-api-access-lxcp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:13:53 crc kubenswrapper[4881]: I0121 11:13:53.859750 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c737afe-a2ad-4075-acd6-9f73aada0e4b-util" (OuterVolumeSpecName: "util") pod "1c737afe-a2ad-4075-acd6-9f73aada0e4b" (UID: "1c737afe-a2ad-4075-acd6-9f73aada0e4b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:13:53 crc kubenswrapper[4881]: I0121 11:13:53.946828 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxcp4\" (UniqueName: \"kubernetes.io/projected/1c737afe-a2ad-4075-acd6-9f73aada0e4b-kube-api-access-lxcp4\") on node \"crc\" DevicePath \"\"" Jan 21 11:13:53 crc kubenswrapper[4881]: I0121 11:13:53.946934 4881 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c737afe-a2ad-4075-acd6-9f73aada0e4b-util\") on node \"crc\" DevicePath \"\"" Jan 21 11:13:53 crc kubenswrapper[4881]: I0121 11:13:53.946948 4881 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c737afe-a2ad-4075-acd6-9f73aada0e4b-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:13:54 crc kubenswrapper[4881]: I0121 11:13:54.416945 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" event={"ID":"1c737afe-a2ad-4075-acd6-9f73aada0e4b","Type":"ContainerDied","Data":"32950c149b73cfc98cb369b7708eaa4070423d894512b36f017ccaec2e114010"} Jan 21 11:13:54 crc kubenswrapper[4881]: I0121 11:13:54.417020 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32950c149b73cfc98cb369b7708eaa4070423d894512b36f017ccaec2e114010" Jan 21 11:13:54 crc kubenswrapper[4881]: I0121 11:13:54.417024 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" Jan 21 11:13:58 crc kubenswrapper[4881]: I0121 11:13:58.642157 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-766b56994f-7hsc6"] Jan 21 11:13:58 crc kubenswrapper[4881]: E0121 11:13:58.642844 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c737afe-a2ad-4075-acd6-9f73aada0e4b" containerName="extract" Jan 21 11:13:58 crc kubenswrapper[4881]: I0121 11:13:58.642856 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c737afe-a2ad-4075-acd6-9f73aada0e4b" containerName="extract" Jan 21 11:13:58 crc kubenswrapper[4881]: E0121 11:13:58.642877 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c737afe-a2ad-4075-acd6-9f73aada0e4b" containerName="pull" Jan 21 11:13:58 crc kubenswrapper[4881]: I0121 11:13:58.642883 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c737afe-a2ad-4075-acd6-9f73aada0e4b" containerName="pull" Jan 21 11:13:58 crc kubenswrapper[4881]: E0121 11:13:58.642892 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c737afe-a2ad-4075-acd6-9f73aada0e4b" containerName="util" Jan 21 11:13:58 crc kubenswrapper[4881]: I0121 11:13:58.642898 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c737afe-a2ad-4075-acd6-9f73aada0e4b" containerName="util" Jan 21 11:13:58 crc kubenswrapper[4881]: I0121 11:13:58.643024 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c737afe-a2ad-4075-acd6-9f73aada0e4b" containerName="extract" Jan 21 11:13:58 crc kubenswrapper[4881]: I0121 11:13:58.643475 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-766b56994f-7hsc6" Jan 21 11:13:58 crc kubenswrapper[4881]: I0121 11:13:58.646679 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-8fwv9" Jan 21 11:13:58 crc kubenswrapper[4881]: I0121 11:13:58.682940 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-766b56994f-7hsc6"] Jan 21 11:13:58 crc kubenswrapper[4881]: I0121 11:13:58.863232 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsnnn\" (UniqueName: \"kubernetes.io/projected/3a9a96af-4c4b-45b4-ade0-688a9029cf7b-kube-api-access-jsnnn\") pod \"openstack-operator-controller-init-766b56994f-7hsc6\" (UID: \"3a9a96af-4c4b-45b4-ade0-688a9029cf7b\") " pod="openstack-operators/openstack-operator-controller-init-766b56994f-7hsc6" Jan 21 11:13:58 crc kubenswrapper[4881]: I0121 11:13:58.964602 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsnnn\" (UniqueName: \"kubernetes.io/projected/3a9a96af-4c4b-45b4-ade0-688a9029cf7b-kube-api-access-jsnnn\") pod \"openstack-operator-controller-init-766b56994f-7hsc6\" (UID: \"3a9a96af-4c4b-45b4-ade0-688a9029cf7b\") " pod="openstack-operators/openstack-operator-controller-init-766b56994f-7hsc6" Jan 21 11:13:58 crc kubenswrapper[4881]: I0121 11:13:58.990413 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsnnn\" (UniqueName: \"kubernetes.io/projected/3a9a96af-4c4b-45b4-ade0-688a9029cf7b-kube-api-access-jsnnn\") pod \"openstack-operator-controller-init-766b56994f-7hsc6\" (UID: \"3a9a96af-4c4b-45b4-ade0-688a9029cf7b\") " pod="openstack-operators/openstack-operator-controller-init-766b56994f-7hsc6" Jan 21 11:13:59 crc kubenswrapper[4881]: I0121 11:13:59.262905 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-766b56994f-7hsc6" Jan 21 11:13:59 crc kubenswrapper[4881]: I0121 11:13:59.530731 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-766b56994f-7hsc6"] Jan 21 11:13:59 crc kubenswrapper[4881]: I0121 11:13:59.851581 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:13:59 crc kubenswrapper[4881]: I0121 11:13:59.851966 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:14:00 crc kubenswrapper[4881]: I0121 11:14:00.462983 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-766b56994f-7hsc6" event={"ID":"3a9a96af-4c4b-45b4-ade0-688a9029cf7b","Type":"ContainerStarted","Data":"c3ec15dca0760e651b670417bc72a856967a47424d614b936250fcd519b604ec"} Jan 21 11:14:08 crc kubenswrapper[4881]: I0121 11:14:08.624542 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-766b56994f-7hsc6" event={"ID":"3a9a96af-4c4b-45b4-ade0-688a9029cf7b","Type":"ContainerStarted","Data":"31e53cf03fd9750f0bc0a32053b62a45c1194acd86a68c42b68e667efc242a89"} Jan 21 11:14:08 crc kubenswrapper[4881]: I0121 11:14:08.625286 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-766b56994f-7hsc6" Jan 21 11:14:08 crc kubenswrapper[4881]: I0121 11:14:08.675026 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-766b56994f-7hsc6" podStartSLOduration=3.074652435 podStartE2EDuration="10.675007357s" podCreationTimestamp="2026-01-21 11:13:58 +0000 UTC" firstStartedPulling="2026-01-21 11:13:59.552088025 +0000 UTC m=+1026.812044494" lastFinishedPulling="2026-01-21 11:14:07.152442947 +0000 UTC m=+1034.412399416" observedRunningTime="2026-01-21 11:14:08.67351108 +0000 UTC m=+1035.933467549" watchObservedRunningTime="2026-01-21 11:14:08.675007357 +0000 UTC m=+1035.934963826" Jan 21 11:14:19 crc kubenswrapper[4881]: I0121 11:14:19.266592 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-766b56994f-7hsc6" Jan 21 11:14:29 crc kubenswrapper[4881]: I0121 11:14:29.963439 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:14:29 crc kubenswrapper[4881]: I0121 11:14:29.964023 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 21 11:14:29 crc kubenswrapper[4881]: I0121 11:14:29.987779 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 11:14:29 crc kubenswrapper[4881]: I0121 11:14:29.988532 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"abaaf16a1930b4e2e9a1e1d952f2948a8b09bfb0c0f18add47eef44fe07067c5"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:14:29 crc kubenswrapper[4881]: I0121 11:14:29.988599 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://abaaf16a1930b4e2e9a1e1d952f2948a8b09bfb0c0f18add47eef44fe07067c5" gracePeriod=600 Jan 21 11:14:31 crc kubenswrapper[4881]: I0121 11:14:31.206410 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="abaaf16a1930b4e2e9a1e1d952f2948a8b09bfb0c0f18add47eef44fe07067c5" exitCode=0 Jan 21 11:14:31 crc kubenswrapper[4881]: I0121 11:14:31.206527 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"abaaf16a1930b4e2e9a1e1d952f2948a8b09bfb0c0f18add47eef44fe07067c5"} Jan 21 11:14:31 crc kubenswrapper[4881]: I0121 11:14:31.206805 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"d0f3ab6355e31b97e337f7f21fb84796e3dea68bac874475991ce7eb43a93a82"} Jan 21 11:14:31 crc kubenswrapper[4881]: I0121 11:14:31.206841 4881 scope.go:117] "RemoveContainer" containerID="c61b3d568dcd0ae9a4c5e1f2de21cf5a0db2cf65652a9e217f03473254856b16" Jan 21 11:14:39 crc kubenswrapper[4881]: I0121 11:14:39.965478 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-svq8w"] Jan 21 11:14:39 crc kubenswrapper[4881]: I0121 11:14:39.967277 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-svq8w" Jan 21 11:14:39 crc kubenswrapper[4881]: I0121 11:14:39.969749 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-njf4m" Jan 21 11:14:39 crc kubenswrapper[4881]: I0121 11:14:39.978927 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-svq8w"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:39.992537 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-7qgck"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:39.993711 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-7qgck" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:39.998385 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-rzgzl" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.011826 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-4wmln"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.012751 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-4wmln" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.016685 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-f8629" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.021646 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-jv7cr"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.022967 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-jv7cr" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.024519 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-58vbs" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.100101 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-zmgll"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.101621 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-zmgll" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.102847 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-bv8wz"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.103676 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-bv8wz" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.109050 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.110180 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.112331 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-9ktfq" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.112557 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-b77kh" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.117999 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.142251 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-m6lch" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.161630 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvpzz\" (UniqueName: \"kubernetes.io/projected/36e5ddfe-67a4-4721-9ef5-b9459c64bf5c-kube-api-access-zvpzz\") pod \"designate-operator-controller-manager-9f958b845-4wmln\" (UID: \"36e5ddfe-67a4-4721-9ef5-b9459c64bf5c\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-4wmln" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.166246 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7p2p\" (UniqueName: \"kubernetes.io/projected/1f795f92-d385-49bc-bc91-5109734f4d5a-kube-api-access-n7p2p\") pod \"glance-operator-controller-manager-c6994669c-jv7cr\" (UID: \"1f795f92-d385-49bc-bc91-5109734f4d5a\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-jv7cr" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.190069 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znqn9\" (UniqueName: \"kubernetes.io/projected/848fd8db-3bd5-4e22-96ca-f69b181e48be-kube-api-access-znqn9\") pod \"barbican-operator-controller-manager-7ddb5c749-svq8w\" (UID: \"848fd8db-3bd5-4e22-96ca-f69b181e48be\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-svq8w" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.200261 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z9wn\" (UniqueName: \"kubernetes.io/projected/a028dcae-6b9d-414d-8bab-652f301de541-kube-api-access-8z9wn\") pod \"cinder-operator-controller-manager-9b68f5989-7qgck\" (UID: \"a028dcae-6b9d-414d-8bab-652f301de541\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-7qgck" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.249837 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-5qcms"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.251402 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-5qcms" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.262392 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-9zp7h"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.263562 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-9zp7h" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.269568 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-8ghks" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.269726 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-5zkmj" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.294141 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.295426 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.303060 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-4wmln"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.303515 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-d5s42" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.321590 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-zmgll"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.330191 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-7qgck"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.341116 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7p2p\" (UniqueName: \"kubernetes.io/projected/1f795f92-d385-49bc-bc91-5109734f4d5a-kube-api-access-n7p2p\") pod \"glance-operator-controller-manager-c6994669c-jv7cr\" (UID: \"1f795f92-d385-49bc-bc91-5109734f4d5a\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-jv7cr" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.341194 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znqn9\" (UniqueName: \"kubernetes.io/projected/848fd8db-3bd5-4e22-96ca-f69b181e48be-kube-api-access-znqn9\") pod \"barbican-operator-controller-manager-7ddb5c749-svq8w\" (UID: \"848fd8db-3bd5-4e22-96ca-f69b181e48be\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-svq8w" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.341238 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw9tk\" (UniqueName: \"kubernetes.io/projected/bb9b2c3f-4f68-44fc-addf-2cf4421be015-kube-api-access-jw9tk\") pod \"horizon-operator-controller-manager-77d5c5b54f-bv8wz\" (UID: \"bb9b2c3f-4f68-44fc-addf-2cf4421be015\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-bv8wz" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.341266 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z9wn\" (UniqueName: \"kubernetes.io/projected/a028dcae-6b9d-414d-8bab-652f301de541-kube-api-access-8z9wn\") pod \"cinder-operator-controller-manager-9b68f5989-7qgck\" (UID: \"a028dcae-6b9d-414d-8bab-652f301de541\") " 
pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-7qgck" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.341339 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w757\" (UniqueName: \"kubernetes.io/projected/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-kube-api-access-2w757\") pod \"infra-operator-controller-manager-77c48c7859-klgq4\" (UID: \"2fe210a4-2adf-4b55-9a43-c1c390f51b0e\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.341365 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csk5g\" (UniqueName: \"kubernetes.io/projected/efb259b7-934f-4bc3-b502-633472d1a1c5-kube-api-access-csk5g\") pod \"heat-operator-controller-manager-594c8c9d5d-zmgll\" (UID: \"efb259b7-934f-4bc3-b502-633472d1a1c5\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-zmgll" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.341395 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert\") pod \"infra-operator-controller-manager-77c48c7859-klgq4\" (UID: \"2fe210a4-2adf-4b55-9a43-c1c390f51b0e\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.341418 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvpzz\" (UniqueName: \"kubernetes.io/projected/36e5ddfe-67a4-4721-9ef5-b9459c64bf5c-kube-api-access-zvpzz\") pod \"designate-operator-controller-manager-9f958b845-4wmln\" (UID: \"36e5ddfe-67a4-4721-9ef5-b9459c64bf5c\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-4wmln" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.350502 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.359741 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-bv8wz"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.372413 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-5qcms"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.374529 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7p2p\" (UniqueName: \"kubernetes.io/projected/1f795f92-d385-49bc-bc91-5109734f4d5a-kube-api-access-n7p2p\") pod \"glance-operator-controller-manager-c6994669c-jv7cr\" (UID: \"1f795f92-d385-49bc-bc91-5109734f4d5a\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-jv7cr" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.386399 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-jv7cr"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.390552 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-9zp7h"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.391316 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvpzz\" (UniqueName: 
\"kubernetes.io/projected/36e5ddfe-67a4-4721-9ef5-b9459c64bf5c-kube-api-access-zvpzz\") pod \"designate-operator-controller-manager-9f958b845-4wmln\" (UID: \"36e5ddfe-67a4-4721-9ef5-b9459c64bf5c\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-4wmln" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.394618 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znqn9\" (UniqueName: \"kubernetes.io/projected/848fd8db-3bd5-4e22-96ca-f69b181e48be-kube-api-access-znqn9\") pod \"barbican-operator-controller-manager-7ddb5c749-svq8w\" (UID: \"848fd8db-3bd5-4e22-96ca-f69b181e48be\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-svq8w" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.399451 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-s6gm8"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.400034 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z9wn\" (UniqueName: \"kubernetes.io/projected/a028dcae-6b9d-414d-8bab-652f301de541-kube-api-access-8z9wn\") pod \"cinder-operator-controller-manager-9b68f5989-7qgck\" (UID: \"a028dcae-6b9d-414d-8bab-652f301de541\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-7qgck" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.400435 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-s6gm8" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.413869 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.416317 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.417377 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.417920 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-g26mn" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.421246 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-dklr8" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.443163 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq74t\" (UniqueName: \"kubernetes.io/projected/b72b2323-5329-4145-9cee-b447d9e2a304-kube-api-access-wq74t\") pod \"manila-operator-controller-manager-864f6b75bf-h6dr4\" (UID: \"b72b2323-5329-4145-9cee-b447d9e2a304\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.443424 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcnf6\" (UniqueName: \"kubernetes.io/projected/ba9a1249-fc58-4809-a472-d199afa9b52b-kube-api-access-pcnf6\") pod \"keystone-operator-controller-manager-767fdc4f47-9zp7h\" (UID: \"ba9a1249-fc58-4809-a472-d199afa9b52b\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-9zp7h" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.443489 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw9tk\" (UniqueName: \"kubernetes.io/projected/bb9b2c3f-4f68-44fc-addf-2cf4421be015-kube-api-access-jw9tk\") pod \"horizon-operator-controller-manager-77d5c5b54f-bv8wz\" (UID: \"bb9b2c3f-4f68-44fc-addf-2cf4421be015\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-bv8wz" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.443517 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r2tn\" (UniqueName: \"kubernetes.io/projected/d0cafd1d-5f37-499a-a531-547a137aae21-kube-api-access-8r2tn\") pod \"ironic-operator-controller-manager-78757b4889-5qcms\" (UID: \"d0cafd1d-5f37-499a-a531-547a137aae21\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-5qcms" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.443685 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w757\" (UniqueName: \"kubernetes.io/projected/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-kube-api-access-2w757\") pod \"infra-operator-controller-manager-77c48c7859-klgq4\" (UID: \"2fe210a4-2adf-4b55-9a43-c1c390f51b0e\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.443717 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csk5g\" (UniqueName: \"kubernetes.io/projected/efb259b7-934f-4bc3-b502-633472d1a1c5-kube-api-access-csk5g\") pod \"heat-operator-controller-manager-594c8c9d5d-zmgll\" (UID: \"efb259b7-934f-4bc3-b502-633472d1a1c5\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-zmgll" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.443763 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert\") pod \"infra-operator-controller-manager-77c48c7859-klgq4\" (UID: \"2fe210a4-2adf-4b55-9a43-c1c390f51b0e\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" Jan 21 11:14:40 crc kubenswrapper[4881]: E0121 11:14:40.443986 4881 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 11:14:40 crc kubenswrapper[4881]: E0121 11:14:40.444060 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert podName:2fe210a4-2adf-4b55-9a43-c1c390f51b0e nodeName:}" failed. No retries permitted until 2026-01-21 11:14:40.944036238 +0000 UTC m=+1068.203992697 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert") pod "infra-operator-controller-manager-77c48c7859-klgq4" (UID: "2fe210a4-2adf-4b55-9a43-c1c390f51b0e") : secret "infra-operator-webhook-server-cert" not found Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.446403 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.460547 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-s6gm8"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.477610 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw9tk\" (UniqueName: \"kubernetes.io/projected/bb9b2c3f-4f68-44fc-addf-2cf4421be015-kube-api-access-jw9tk\") pod \"horizon-operator-controller-manager-77d5c5b54f-bv8wz\" (UID: \"bb9b2c3f-4f68-44fc-addf-2cf4421be015\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-bv8wz" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.484017 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-798zt"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.485283 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-798zt" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.487370 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csk5g\" (UniqueName: \"kubernetes.io/projected/efb259b7-934f-4bc3-b502-633472d1a1c5-kube-api-access-csk5g\") pod \"heat-operator-controller-manager-594c8c9d5d-zmgll\" (UID: \"efb259b7-934f-4bc3-b502-633472d1a1c5\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-zmgll" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.488928 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-bqsg6" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.501050 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w757\" (UniqueName: \"kubernetes.io/projected/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-kube-api-access-2w757\") pod \"infra-operator-controller-manager-77c48c7859-klgq4\" (UID: \"2fe210a4-2adf-4b55-9a43-c1c390f51b0e\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.526588 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-n7kgd"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.527944 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-n7kgd" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.529824 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-m9p9v" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.545509 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcnf6\" (UniqueName: \"kubernetes.io/projected/ba9a1249-fc58-4809-a472-d199afa9b52b-kube-api-access-pcnf6\") pod \"keystone-operator-controller-manager-767fdc4f47-9zp7h\" (UID: \"ba9a1249-fc58-4809-a472-d199afa9b52b\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-9zp7h" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.545563 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r2tn\" (UniqueName: \"kubernetes.io/projected/d0cafd1d-5f37-499a-a531-547a137aae21-kube-api-access-8r2tn\") pod \"ironic-operator-controller-manager-78757b4889-5qcms\" (UID: \"d0cafd1d-5f37-499a-a531-547a137aae21\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-5qcms" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.545604 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt66j\" (UniqueName: \"kubernetes.io/projected/c3b86204-5389-4b6a-bd45-fb6ee23b784e-kube-api-access-zt66j\") pod \"neutron-operator-controller-manager-cb4666565-ncnww\" (UID: \"c3b86204-5389-4b6a-bd45-fb6ee23b784e\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.545661 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2qhv\" (UniqueName: \"kubernetes.io/projected/4c2550fe-b3eb-4eef-8ffc-ebb4a9ce1b5f-kube-api-access-s2qhv\") pod 
\"mariadb-operator-controller-manager-c87fff755-s6gm8\" (UID: \"4c2550fe-b3eb-4eef-8ffc-ebb4a9ce1b5f\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-s6gm8" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.545724 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq74t\" (UniqueName: \"kubernetes.io/projected/b72b2323-5329-4145-9cee-b447d9e2a304-kube-api-access-wq74t\") pod \"manila-operator-controller-manager-864f6b75bf-h6dr4\" (UID: \"b72b2323-5329-4145-9cee-b447d9e2a304\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.560089 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-798zt"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.570556 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq74t\" (UniqueName: \"kubernetes.io/projected/b72b2323-5329-4145-9cee-b447d9e2a304-kube-api-access-wq74t\") pod \"manila-operator-controller-manager-864f6b75bf-h6dr4\" (UID: \"b72b2323-5329-4145-9cee-b447d9e2a304\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.571714 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcnf6\" (UniqueName: \"kubernetes.io/projected/ba9a1249-fc58-4809-a472-d199afa9b52b-kube-api-access-pcnf6\") pod \"keystone-operator-controller-manager-767fdc4f47-9zp7h\" (UID: \"ba9a1249-fc58-4809-a472-d199afa9b52b\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-9zp7h" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.572546 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-n7kgd"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.583090 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r2tn\" (UniqueName: \"kubernetes.io/projected/d0cafd1d-5f37-499a-a531-547a137aae21-kube-api-access-8r2tn\") pod \"ironic-operator-controller-manager-78757b4889-5qcms\" (UID: \"d0cafd1d-5f37-499a-a531-547a137aae21\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-5qcms" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.595278 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-9zp7h" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.601352 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.602396 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-svq8w" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.602465 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.610200 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.610335 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-fzcfv" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.619037 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-7qgck" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.622293 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.624735 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.626024 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.627903 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-872n6" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.631066 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-vpqw4"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.632295 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-vpqw4" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.635191 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-7h6dm" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.635941 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-4wmln" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.638633 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.639366 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.641067 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-j9ww2" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.645739 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-jv7cr" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.647333 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt66j\" (UniqueName: \"kubernetes.io/projected/c3b86204-5389-4b6a-bd45-fb6ee23b784e-kube-api-access-zt66j\") pod \"neutron-operator-controller-manager-cb4666565-ncnww\" (UID: \"c3b86204-5389-4b6a-bd45-fb6ee23b784e\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.647395 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2qhv\" (UniqueName: \"kubernetes.io/projected/4c2550fe-b3eb-4eef-8ffc-ebb4a9ce1b5f-kube-api-access-s2qhv\") pod \"mariadb-operator-controller-manager-c87fff755-s6gm8\" (UID: \"4c2550fe-b3eb-4eef-8ffc-ebb4a9ce1b5f\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-s6gm8" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.647467 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96fmk\" (UniqueName: \"kubernetes.io/projected/761a1a49-e01e-4674-b1f4-da732e1def98-kube-api-access-96fmk\") pod \"nova-operator-controller-manager-65849867d6-798zt\" (UID: \"761a1a49-e01e-4674-b1f4-da732e1def98\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-798zt" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.647496 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhjc2\" (UniqueName: \"kubernetes.io/projected/340257c4-9218-49b0-8a75-b2a4e0231fe3-kube-api-access-nhjc2\") pod \"octavia-operator-controller-manager-7fc9b76cf6-n7kgd\" (UID: \"340257c4-9218-49b0-8a75-b2a4e0231fe3\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-n7kgd" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.666031 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-vpqw4"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.697830 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt66j\" (UniqueName: \"kubernetes.io/projected/c3b86204-5389-4b6a-bd45-fb6ee23b784e-kube-api-access-zt66j\") pod \"neutron-operator-controller-manager-cb4666565-ncnww\" (UID: \"c3b86204-5389-4b6a-bd45-fb6ee23b784e\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.706833 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.713776 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2qhv\" (UniqueName: \"kubernetes.io/projected/4c2550fe-b3eb-4eef-8ffc-ebb4a9ce1b5f-kube-api-access-s2qhv\") pod \"mariadb-operator-controller-manager-c87fff755-s6gm8\" (UID: \"4c2550fe-b3eb-4eef-8ffc-ebb4a9ce1b5f\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-s6gm8" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.741080 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.741847 4881 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-zmgll" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.750563 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544795q\" (UID: \"b1b17be2-e382-4916-8e53-a68c85b5bfc2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.751001 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b5pw\" (UniqueName: \"kubernetes.io/projected/50cfdf18-6a7e-4b3c-bb0f-5260fc3d42eb-kube-api-access-9b5pw\") pod \"ovn-operator-controller-manager-55db956ddc-vpqw4\" (UID: \"50cfdf18-6a7e-4b3c-bb0f-5260fc3d42eb\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-vpqw4" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.751044 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvvxc\" (UniqueName: \"kubernetes.io/projected/8c504afd-e4e0-4676-b292-b569b638a7dd-kube-api-access-dvvxc\") pod \"swift-operator-controller-manager-85dd56d4cc-rk8l8\" (UID: \"8c504afd-e4e0-4676-b292-b569b638a7dd\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.751097 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x4wn\" (UniqueName: \"kubernetes.io/projected/b1b17be2-e382-4916-8e53-a68c85b5bfc2-kube-api-access-7x4wn\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544795q\" (UID: \"b1b17be2-e382-4916-8e53-a68c85b5bfc2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.751153 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96fmk\" (UniqueName: \"kubernetes.io/projected/761a1a49-e01e-4674-b1f4-da732e1def98-kube-api-access-96fmk\") pod \"nova-operator-controller-manager-65849867d6-798zt\" (UID: \"761a1a49-e01e-4674-b1f4-da732e1def98\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-798zt" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.751201 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhjc2\" (UniqueName: \"kubernetes.io/projected/340257c4-9218-49b0-8a75-b2a4e0231fe3-kube-api-access-nhjc2\") pod \"octavia-operator-controller-manager-7fc9b76cf6-n7kgd\" (UID: \"340257c4-9218-49b0-8a75-b2a4e0231fe3\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-n7kgd" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.751317 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6hzs\" (UniqueName: \"kubernetes.io/projected/e8e6f423-a07b-4a22-9e39-efa8de22747e-kube-api-access-p6hzs\") pod \"placement-operator-controller-manager-686df47fcb-jh4z9\" (UID: \"e8e6f423-a07b-4a22-9e39-efa8de22747e\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.785203 4881 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhjc2\" (UniqueName: \"kubernetes.io/projected/340257c4-9218-49b0-8a75-b2a4e0231fe3-kube-api-access-nhjc2\") pod \"octavia-operator-controller-manager-7fc9b76cf6-n7kgd\" (UID: \"340257c4-9218-49b0-8a75-b2a4e0231fe3\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-n7kgd" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.793212 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96fmk\" (UniqueName: \"kubernetes.io/projected/761a1a49-e01e-4674-b1f4-da732e1def98-kube-api-access-96fmk\") pod \"nova-operator-controller-manager-65849867d6-798zt\" (UID: \"761a1a49-e01e-4674-b1f4-da732e1def98\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-798zt" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.803344 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-bv8wz" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.820572 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-s6gm8" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.826712 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.830565 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.837510 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-gjcsh" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.849908 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.853336 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b5pw\" (UniqueName: \"kubernetes.io/projected/50cfdf18-6a7e-4b3c-bb0f-5260fc3d42eb-kube-api-access-9b5pw\") pod \"ovn-operator-controller-manager-55db956ddc-vpqw4\" (UID: \"50cfdf18-6a7e-4b3c-bb0f-5260fc3d42eb\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-vpqw4" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.853388 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvvxc\" (UniqueName: \"kubernetes.io/projected/8c504afd-e4e0-4676-b292-b569b638a7dd-kube-api-access-dvvxc\") pod \"swift-operator-controller-manager-85dd56d4cc-rk8l8\" (UID: \"8c504afd-e4e0-4676-b292-b569b638a7dd\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.853413 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x4wn\" (UniqueName: \"kubernetes.io/projected/b1b17be2-e382-4916-8e53-a68c85b5bfc2-kube-api-access-7x4wn\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544795q\" (UID: \"b1b17be2-e382-4916-8e53-a68c85b5bfc2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.853490 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6hzs\" (UniqueName: \"kubernetes.io/projected/e8e6f423-a07b-4a22-9e39-efa8de22747e-kube-api-access-p6hzs\") pod \"placement-operator-controller-manager-686df47fcb-jh4z9\" (UID: \"e8e6f423-a07b-4a22-9e39-efa8de22747e\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.853542 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544795q\" (UID: \"b1b17be2-e382-4916-8e53-a68c85b5bfc2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q" Jan 21 11:14:40 crc kubenswrapper[4881]: E0121 11:14:40.853695 4881 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 11:14:40 crc kubenswrapper[4881]: E0121 11:14:40.853773 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert podName:b1b17be2-e382-4916-8e53-a68c85b5bfc2 nodeName:}" failed. No retries permitted until 2026-01-21 11:14:41.35375345 +0000 UTC m=+1068.613709919 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8544795q" (UID: "b1b17be2-e382-4916-8e53-a68c85b5bfc2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.856187 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.857982 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.858094 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.865497 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-798zt" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.874846 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.883238 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-n7kgd" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.883797 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.884820 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-5qcms" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.905032 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-849fd9b886-k9t7q"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.907566 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-849fd9b886-k9t7q" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.926661 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-849fd9b886-k9t7q"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.955106 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxbkl\" (UniqueName: \"kubernetes.io/projected/2aac430e-3ac8-4624-8575-66386b5c2df3-kube-api-access-pxbkl\") pod \"test-operator-controller-manager-7cd8bc9dbb-tttcz\" (UID: \"2aac430e-3ac8-4624-8575-66386b5c2df3\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.955640 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert\") pod \"infra-operator-controller-manager-77c48c7859-klgq4\" (UID: \"2fe210a4-2adf-4b55-9a43-c1c390f51b0e\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.955894 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8mtz\" (UniqueName: \"kubernetes.io/projected/1cebbaaf-6189-409a-8f25-43d7fac77f95-kube-api-access-j8mtz\") pod \"watcher-operator-controller-manager-849fd9b886-k9t7q\" (UID: \"1cebbaaf-6189-409a-8f25-43d7fac77f95\") " pod="openstack-operators/watcher-operator-controller-manager-849fd9b886-k9t7q" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.956080 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm5x4\" (UniqueName: \"kubernetes.io/projected/55ce5ee6-47f4-4874-92dc-6ab78f2ce212-kube-api-access-nm5x4\") pod \"telemetry-operator-controller-manager-5f8f495fcf-fcht4\" (UID: \"55ce5ee6-47f4-4874-92dc-6ab78f2ce212\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4" Jan 21 11:14:40 crc kubenswrapper[4881]: E0121 11:14:40.956523 4881 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 11:14:40 crc kubenswrapper[4881]: E0121 11:14:40.956806 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert podName:2fe210a4-2adf-4b55-9a43-c1c390f51b0e nodeName:}" failed. No retries permitted until 2026-01-21 11:14:41.956762624 +0000 UTC m=+1069.216719283 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert") pod "infra-operator-controller-manager-77c48c7859-klgq4" (UID: "2fe210a4-2adf-4b55-9a43-c1c390f51b0e") : secret "infra-operator-webhook-server-cert" not found Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.976502 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.979202 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.004973 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8"] Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.023024 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc"] Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.023917 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc"] Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.023996 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.075369 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.076345 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-t9k6g" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.079200 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.081284 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x4wn\" (UniqueName: \"kubernetes.io/projected/b1b17be2-e382-4916-8e53-a68c85b5bfc2-kube-api-access-7x4wn\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544795q\" (UID: \"b1b17be2-e382-4916-8e53-a68c85b5bfc2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.083280 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6hzs\" (UniqueName: \"kubernetes.io/projected/e8e6f423-a07b-4a22-9e39-efa8de22747e-kube-api-access-p6hzs\") pod \"placement-operator-controller-manager-686df47fcb-jh4z9\" (UID: \"e8e6f423-a07b-4a22-9e39-efa8de22747e\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.086908 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b5pw\" (UniqueName: \"kubernetes.io/projected/50cfdf18-6a7e-4b3c-bb0f-5260fc3d42eb-kube-api-access-9b5pw\") pod \"ovn-operator-controller-manager-55db956ddc-vpqw4\" (UID: \"50cfdf18-6a7e-4b3c-bb0f-5260fc3d42eb\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-vpqw4" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.089139 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-jqcjd" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.089566 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-s4m4r" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.089922 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-zjv4z" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.091031 4881 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.091258 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxbkl\" (UniqueName: \"kubernetes.io/projected/2aac430e-3ac8-4624-8575-66386b5c2df3-kube-api-access-pxbkl\") pod \"test-operator-controller-manager-7cd8bc9dbb-tttcz\" (UID: \"2aac430e-3ac8-4624-8575-66386b5c2df3\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.091395 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfj8h\" (UniqueName: \"kubernetes.io/projected/8c8feeec-377c-499a-b666-895010f8ebeb-kube-api-access-jfj8h\") pod \"rabbitmq-cluster-operator-manager-668c99d594-76qxc\" (UID: \"8c8feeec-377c-499a-b666-895010f8ebeb\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.091605 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8mtz\" (UniqueName: \"kubernetes.io/projected/1cebbaaf-6189-409a-8f25-43d7fac77f95-kube-api-access-j8mtz\") pod \"watcher-operator-controller-manager-849fd9b886-k9t7q\" (UID: \"1cebbaaf-6189-409a-8f25-43d7fac77f95\") " pod="openstack-operators/watcher-operator-controller-manager-849fd9b886-k9t7q" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.091932 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm5x4\" (UniqueName: \"kubernetes.io/projected/55ce5ee6-47f4-4874-92dc-6ab78f2ce212-kube-api-access-nm5x4\") pod \"telemetry-operator-controller-manager-5f8f495fcf-fcht4\" (UID: \"55ce5ee6-47f4-4874-92dc-6ab78f2ce212\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.092134 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67nts\" (UniqueName: \"kubernetes.io/projected/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-kube-api-access-67nts\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.092345 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.113304 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvvxc\" (UniqueName: \"kubernetes.io/projected/8c504afd-e4e0-4676-b292-b569b638a7dd-kube-api-access-dvvxc\") pod \"swift-operator-controller-manager-85dd56d4cc-rk8l8\" (UID: 
\"8c504afd-e4e0-4676-b292-b569b638a7dd\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.145697 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm5x4\" (UniqueName: \"kubernetes.io/projected/55ce5ee6-47f4-4874-92dc-6ab78f2ce212-kube-api-access-nm5x4\") pod \"telemetry-operator-controller-manager-5f8f495fcf-fcht4\" (UID: \"55ce5ee6-47f4-4874-92dc-6ab78f2ce212\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.146710 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxbkl\" (UniqueName: \"kubernetes.io/projected/2aac430e-3ac8-4624-8575-66386b5c2df3-kube-api-access-pxbkl\") pod \"test-operator-controller-manager-7cd8bc9dbb-tttcz\" (UID: \"2aac430e-3ac8-4624-8575-66386b5c2df3\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.147842 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8mtz\" (UniqueName: \"kubernetes.io/projected/1cebbaaf-6189-409a-8f25-43d7fac77f95-kube-api-access-j8mtz\") pod \"watcher-operator-controller-manager-849fd9b886-k9t7q\" (UID: \"1cebbaaf-6189-409a-8f25-43d7fac77f95\") " pod="openstack-operators/watcher-operator-controller-manager-849fd9b886-k9t7q" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.196250 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-849fd9b886-k9t7q" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.227461 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.227515 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.227538 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfj8h\" (UniqueName: \"kubernetes.io/projected/8c8feeec-377c-499a-b666-895010f8ebeb-kube-api-access-jfj8h\") pod \"rabbitmq-cluster-operator-manager-668c99d594-76qxc\" (UID: \"8c8feeec-377c-499a-b666-895010f8ebeb\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.227598 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67nts\" (UniqueName: \"kubernetes.io/projected/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-kube-api-access-67nts\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:41 crc kubenswrapper[4881]: E0121 
11:14:41.228050 4881 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 11:14:41 crc kubenswrapper[4881]: E0121 11:14:41.228103 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs podName:a55fdb43-cd6c-4415-8ef6-07f6c7da6272 nodeName:}" failed. No retries permitted until 2026-01-21 11:14:41.72808383 +0000 UTC m=+1068.988040299 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs") pod "openstack-operator-controller-manager-87d6d564b-ktcf8" (UID: "a55fdb43-cd6c-4415-8ef6-07f6c7da6272") : secret "webhook-server-cert" not found Jan 21 11:14:41 crc kubenswrapper[4881]: E0121 11:14:41.228265 4881 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 11:14:41 crc kubenswrapper[4881]: E0121 11:14:41.228294 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs podName:a55fdb43-cd6c-4415-8ef6-07f6c7da6272 nodeName:}" failed. No retries permitted until 2026-01-21 11:14:41.728284875 +0000 UTC m=+1068.988241354 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs") pod "openstack-operator-controller-manager-87d6d564b-ktcf8" (UID: "a55fdb43-cd6c-4415-8ef6-07f6c7da6272") : secret "metrics-server-cert" not found Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.261669 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.322086 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-vpqw4" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.325750 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.361082 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544795q\" (UID: \"b1b17be2-e382-4916-8e53-a68c85b5bfc2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q" Jan 21 11:14:41 crc kubenswrapper[4881]: E0121 11:14:41.361980 4881 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 11:14:41 crc kubenswrapper[4881]: E0121 11:14:41.382507 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert podName:b1b17be2-e382-4916-8e53-a68c85b5bfc2 nodeName:}" failed. No retries permitted until 2026-01-21 11:14:42.382457674 +0000 UTC m=+1069.642414153 (durationBeforeRetry 1s). 
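
The durationBeforeRetry values in these nestedpendingoperations records double on each consecutive failure of the same operation — 500ms on the first failure, then 1s, 2s, and 4s later in this log — and each volume is tracked under its own key (the "{volumeName:... podName:...}" string in the record), which is why webhook-certs and metrics-certs carry independent timers. A small self-contained Go sketch of that pattern (my own illustration of the doubling seen in the log, not kubelet source; the two-minute cap is an assumption):

```go
package main

import (
	"fmt"
	"time"
)

// Per-volume retry delays, doubling on each consecutive failure,
// mirroring the 500ms -> 1s -> 2s -> 4s progression in the log.
type backoff struct {
	delays   map[string]time.Duration
	maxDelay time.Duration
}

func (b *backoff) fail(volume string) time.Duration {
	d, ok := b.delays[volume]
	if !ok {
		d = 500 * time.Millisecond // first failure starts at 500ms
	} else {
		d *= 2 // double on every consecutive failure
		if d > b.maxDelay {
			d = b.maxDelay
		}
	}
	b.delays[volume] = d
	return d
}

func main() {
	b := &backoff{delays: map[string]time.Duration{}, maxDelay: 2 * time.Minute}
	for _, v := range []string{"webhook-certs", "metrics-certs", "webhook-certs", "webhook-certs"} {
		fmt.Printf("%s: no retries permitted for %s\n", v, b.fail(v))
	}
}
```

A successful mount resets the entry, which is why volumes that eventually find their secret drop out of these records.
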
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8544795q" (UID: "b1b17be2-e382-4916-8e53-a68c85b5bfc2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.393279 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67nts\" (UniqueName: \"kubernetes.io/projected/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-kube-api-access-67nts\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.395435 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.396336 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfj8h\" (UniqueName: \"kubernetes.io/projected/8c8feeec-377c-499a-b666-895010f8ebeb-kube-api-access-jfj8h\") pod \"rabbitmq-cluster-operator-manager-668c99d594-76qxc\" (UID: \"8c8feeec-377c-499a-b666-895010f8ebeb\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.415732 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.664886 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.767604 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.767762 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:41 crc kubenswrapper[4881]: E0121 11:14:41.768137 4881 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 11:14:41 crc kubenswrapper[4881]: E0121 11:14:41.768232 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs podName:a55fdb43-cd6c-4415-8ef6-07f6c7da6272 nodeName:}" failed. No retries permitted until 2026-01-21 11:14:42.768202159 +0000 UTC m=+1070.028158628 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs") pod "openstack-operator-controller-manager-87d6d564b-ktcf8" (UID: "a55fdb43-cd6c-4415-8ef6-07f6c7da6272") : secret "metrics-server-cert" not found Jan 21 11:14:41 crc kubenswrapper[4881]: E0121 11:14:41.768818 4881 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 11:14:41 crc kubenswrapper[4881]: E0121 11:14:41.768945 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs podName:a55fdb43-cd6c-4415-8ef6-07f6c7da6272 nodeName:}" failed. No retries permitted until 2026-01-21 11:14:42.768912616 +0000 UTC m=+1070.028869085 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs") pod "openstack-operator-controller-manager-87d6d564b-ktcf8" (UID: "a55fdb43-cd6c-4415-8ef6-07f6c7da6272") : secret "webhook-server-cert" not found Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.970541 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert\") pod \"infra-operator-controller-manager-77c48c7859-klgq4\" (UID: \"2fe210a4-2adf-4b55-9a43-c1c390f51b0e\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" Jan 21 11:14:41 crc kubenswrapper[4881]: E0121 11:14:41.970717 4881 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 11:14:41 crc kubenswrapper[4881]: E0121 11:14:41.970765 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert podName:2fe210a4-2adf-4b55-9a43-c1c390f51b0e nodeName:}" failed. No retries permitted until 2026-01-21 11:14:43.970750841 +0000 UTC m=+1071.230707310 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert") pod "infra-operator-controller-manager-77c48c7859-klgq4" (UID: "2fe210a4-2adf-4b55-9a43-c1c390f51b0e") : secret "infra-operator-webhook-server-cert" not found Jan 21 11:14:42 crc kubenswrapper[4881]: I0121 11:14:42.402637 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544795q\" (UID: \"b1b17be2-e382-4916-8e53-a68c85b5bfc2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q" Jan 21 11:14:42 crc kubenswrapper[4881]: E0121 11:14:42.404843 4881 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 11:14:42 crc kubenswrapper[4881]: E0121 11:14:42.405231 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert podName:b1b17be2-e382-4916-8e53-a68c85b5bfc2 nodeName:}" failed. No retries permitted until 2026-01-21 11:14:44.405202449 +0000 UTC m=+1071.665159068 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8544795q" (UID: "b1b17be2-e382-4916-8e53-a68c85b5bfc2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 11:14:42 crc kubenswrapper[4881]: I0121 11:14:42.859129 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:42 crc kubenswrapper[4881]: I0121 11:14:42.859238 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:42 crc kubenswrapper[4881]: E0121 11:14:42.859426 4881 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 11:14:42 crc kubenswrapper[4881]: E0121 11:14:42.859488 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs podName:a55fdb43-cd6c-4415-8ef6-07f6c7da6272 nodeName:}" failed. No retries permitted until 2026-01-21 11:14:44.85946693 +0000 UTC m=+1072.119423399 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs") pod "openstack-operator-controller-manager-87d6d564b-ktcf8" (UID: "a55fdb43-cd6c-4415-8ef6-07f6c7da6272") : secret "metrics-server-cert" not found Jan 21 11:14:42 crc kubenswrapper[4881]: E0121 11:14:42.860073 4881 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 11:14:42 crc kubenswrapper[4881]: E0121 11:14:42.860125 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs podName:a55fdb43-cd6c-4415-8ef6-07f6c7da6272 nodeName:}" failed. No retries permitted until 2026-01-21 11:14:44.860108066 +0000 UTC m=+1072.120064535 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs") pod "openstack-operator-controller-manager-87d6d564b-ktcf8" (UID: "a55fdb43-cd6c-4415-8ef6-07f6c7da6272") : secret "webhook-server-cert" not found Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.053764 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-bv8wz"] Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.060331 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-7qgck"] Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.062610 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert\") pod \"infra-operator-controller-manager-77c48c7859-klgq4\" (UID: \"2fe210a4-2adf-4b55-9a43-c1c390f51b0e\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.062876 4881 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.062953 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert podName:2fe210a4-2adf-4b55-9a43-c1c390f51b0e nodeName:}" failed. No retries permitted until 2026-01-21 11:14:48.062930616 +0000 UTC m=+1075.322887085 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert") pod "infra-operator-controller-manager-77c48c7859-klgq4" (UID: "2fe210a4-2adf-4b55-9a43-c1c390f51b0e") : secret "infra-operator-webhook-server-cert" not found Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.069223 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-n7kgd"] Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.076564 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 11:14:44 crc kubenswrapper[4881]: W0121 11:14:44.088532 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod340257c4_9218_49b0_8a75_b2a4e0231fe3.slice/crio-705bc8c9961f2a159bfd5194f6f035adc5ac923dbc26dd216480b551db77a558 WatchSource:0}: Error finding container 705bc8c9961f2a159bfd5194f6f035adc5ac923dbc26dd216480b551db77a558: Status 404 returned error can't find the container with id 705bc8c9961f2a159bfd5194f6f035adc5ac923dbc26dd216480b551db77a558 Jan 21 11:14:44 crc kubenswrapper[4881]: W0121 11:14:44.091193 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda028dcae_6b9d_414d_8bab_652f301de541.slice/crio-34e07c33fca9996b71aec285847fc0e1b6313856e5811d2b7e23d11c855ced9a WatchSource:0}: Error finding container 34e07c33fca9996b71aec285847fc0e1b6313856e5811d2b7e23d11c855ced9a: Status 404 returned error can't find the container with id 34e07c33fca9996b71aec285847fc0e1b6313856e5811d2b7e23d11c855ced9a Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.125107 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/glance-operator-controller-manager-c6994669c-jv7cr"] Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.136188 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-4wmln"] Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.155862 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-849fd9b886-k9t7q"] Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.167436 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-9zp7h"] Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.180748 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-zmgll"] Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.191562 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-s6gm8"] Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.203191 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-svq8w"] Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.220830 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4"] Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.233115 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc"] Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.242056 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jfj8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-76qxc_openstack-operators(8c8feeec-377c-499a-b666-895010f8ebeb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.241918 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wq74t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-864f6b75bf-h6dr4_openstack-operators(b72b2323-5329-4145-9cee-b447d9e2a304): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.242736 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zt66j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-cb4666565-ncnww_openstack-operators(c3b86204-5389-4b6a-bd45-fb6ee23b784e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.243163 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc" podUID="8c8feeec-377c-499a-b666-895010f8ebeb" Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.243340 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4" podUID="b72b2323-5329-4145-9cee-b447d9e2a304" Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.244544 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" 
pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww" podUID="c3b86204-5389-4b6a-bd45-fb6ee23b784e" Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.246559 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww"] Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.452812 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-vpqw4"] Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.469549 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-5qcms"] Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.477192 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9"] Jan 21 11:14:44 crc kubenswrapper[4881]: W0121 11:14:44.488098 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0cafd1d_5f37_499a_a531_547a137aae21.slice/crio-9b78972310c9556c8896a8e1905d8f1256dfa1c5257d16aff20e8e756d472a4c WatchSource:0}: Error finding container 9b78972310c9556c8896a8e1905d8f1256dfa1c5257d16aff20e8e756d472a4c: Status 404 returned error can't find the container with id 9b78972310c9556c8896a8e1905d8f1256dfa1c5257d16aff20e8e756d472a4c Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.488267 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8"] Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.490529 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544795q\" (UID: \"b1b17be2-e382-4916-8e53-a68c85b5bfc2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q" Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.490730 4881 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.490817 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert podName:b1b17be2-e382-4916-8e53-a68c85b5bfc2 nodeName:}" failed. No retries permitted until 2026-01-21 11:14:48.490792818 +0000 UTC m=+1075.750749287 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8544795q" (UID: "b1b17be2-e382-4916-8e53-a68c85b5bfc2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 11:14:44 crc kubenswrapper[4881]: W0121 11:14:44.497410 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8e6f423_a07b_4a22_9e39_efa8de22747e.slice/crio-ef05d38ff266728a64eb1d01c6a0ea065a58968faf1ec7d3ee5aed5432d604a4 WatchSource:0}: Error finding container ef05d38ff266728a64eb1d01c6a0ea065a58968faf1ec7d3ee5aed5432d604a4: Status 404 returned error can't find the container with id ef05d38ff266728a64eb1d01c6a0ea065a58968faf1ec7d3ee5aed5432d604a4 Jan 21 11:14:44 crc kubenswrapper[4881]: W0121 11:14:44.498477 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55ce5ee6_47f4_4874_92dc_6ab78f2ce212.slice/crio-c6889bb0a1437b385995f9935900046a8b7e40d8e117c7cf186721da4929aed4 WatchSource:0}: Error finding container c6889bb0a1437b385995f9935900046a8b7e40d8e117c7cf186721da4929aed4: Status 404 returned error can't find the container with id c6889bb0a1437b385995f9935900046a8b7e40d8e117c7cf186721da4929aed4 Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.500416 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p6hzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-686df47fcb-jh4z9_openstack-operators(e8e6f423-a07b-4a22-9e39-efa8de22747e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.500579 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4"] Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.501844 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9" podUID="e8e6f423-a07b-4a22-9e39-efa8de22747e" Jan 21 11:14:44 crc kubenswrapper[4881]: W0121 11:14:44.502965 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c504afd_e4e0_4676_b292_b569b638a7dd.slice/crio-1b839dbf4409e9315a7364f6fb7c43674c64cedc21438656b9e761c61a2ba388 WatchSource:0}: Error finding container 1b839dbf4409e9315a7364f6fb7c43674c64cedc21438656b9e761c61a2ba388: Status 404 returned error can't find the container with id 1b839dbf4409e9315a7364f6fb7c43674c64cedc21438656b9e761c61a2ba388 Jan 21 11:14:44 crc kubenswrapper[4881]: W0121 11:14:44.506963 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2aac430e_3ac8_4624_8575_66386b5c2df3.slice/crio-62bdcc15a65f1ed35c94ec3dea6a3c543fa7b28dd41b1fdfa362c736c28501c4 WatchSource:0}: Error finding container 62bdcc15a65f1ed35c94ec3dea6a3c543fa7b28dd41b1fdfa362c736c28501c4: Status 404 returned error can't find the container with id 62bdcc15a65f1ed35c94ec3dea6a3c543fa7b28dd41b1fdfa362c736c28501c4 Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.509085 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nm5x4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5f8f495fcf-fcht4_openstack-operators(55ce5ee6-47f4-4874-92dc-6ab78f2ce212): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.510215 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz"] Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.510286 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4" podUID="55ce5ee6-47f4-4874-92dc-6ab78f2ce212" Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.513323 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pxbkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7cd8bc9dbb-tttcz_openstack-operators(2aac430e-3ac8-4624-8575-66386b5c2df3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.513633 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dvvxc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-85dd56d4cc-rk8l8_openstack-operators(8c504afd-e4e0-4676-b292-b569b638a7dd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.514443 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz" podUID="2aac430e-3ac8-4624-8575-66386b5c2df3" Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.515384 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8" podUID="8c504afd-e4e0-4676-b292-b569b638a7dd" Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.517350 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-798zt"] Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.535576 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-96fmk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-65849867d6-798zt_openstack-operators(761a1a49-e01e-4674-b1f4-da732e1def98): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.536800 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-798zt" podUID="761a1a49-e01e-4674-b1f4-da732e1def98" Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.670054 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-svq8w" event={"ID":"848fd8db-3bd5-4e22-96ca-f69b181e48be","Type":"ContainerStarted","Data":"d523e709afe6be547fb9649a5bbc2cdef91edff360388c92c5a2498105b386be"} Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.671269 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz" event={"ID":"2aac430e-3ac8-4624-8575-66386b5c2df3","Type":"ContainerStarted","Data":"62bdcc15a65f1ed35c94ec3dea6a3c543fa7b28dd41b1fdfa362c736c28501c4"} Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.672838 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz" podUID="2aac430e-3ac8-4624-8575-66386b5c2df3" Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.674387 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-4wmln" event={"ID":"36e5ddfe-67a4-4721-9ef5-b9459c64bf5c","Type":"ContainerStarted","Data":"a3bf9d1f7f2a3f7faa4275cef20669af63558cfc9bb35df5469246cc5d68128e"} Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.676435 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-7qgck" event={"ID":"a028dcae-6b9d-414d-8bab-652f301de541","Type":"ContainerStarted","Data":"34e07c33fca9996b71aec285847fc0e1b6313856e5811d2b7e23d11c855ced9a"} Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.678091 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-849fd9b886-k9t7q" event={"ID":"1cebbaaf-6189-409a-8f25-43d7fac77f95","Type":"ContainerStarted","Data":"7a72c1d78ee332762b08b248316b0a5d30c3a405c177d37bf03da637118e6401"} Jan 21 11:14:44 crc 
kubenswrapper[4881]: I0121 11:14:44.681456 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4" event={"ID":"b72b2323-5329-4145-9cee-b447d9e2a304","Type":"ContainerStarted","Data":"415c3a374607aa36d534fe15022f92cc1c7b8964bc9b8c3dd1323eefbb92219c"} Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.683058 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32\\\"\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4" podUID="b72b2323-5329-4145-9cee-b447d9e2a304" Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.684410 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9" event={"ID":"e8e6f423-a07b-4a22-9e39-efa8de22747e","Type":"ContainerStarted","Data":"ef05d38ff266728a64eb1d01c6a0ea065a58968faf1ec7d3ee5aed5432d604a4"} Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.685768 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737\\\"\"" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9" podUID="e8e6f423-a07b-4a22-9e39-efa8de22747e" Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.686871 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8" event={"ID":"8c504afd-e4e0-4676-b292-b569b638a7dd","Type":"ContainerStarted","Data":"1b839dbf4409e9315a7364f6fb7c43674c64cedc21438656b9e761c61a2ba388"} Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.690044 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92\\\"\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8" podUID="8c504afd-e4e0-4676-b292-b569b638a7dd" Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.693411 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-s6gm8" event={"ID":"4c2550fe-b3eb-4eef-8ffc-ebb4a9ce1b5f","Type":"ContainerStarted","Data":"4c84a19765fe7772a94a4cb6d3632ce28346afc6e594da959e1dd40376d118fd"} Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.696548 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-jv7cr" event={"ID":"1f795f92-d385-49bc-bc91-5109734f4d5a","Type":"ContainerStarted","Data":"155c3c510496af1f04966e3427bde8ad8646a8854ad7c215b148b70d32e5a151"} Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.698145 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-5qcms" event={"ID":"d0cafd1d-5f37-499a-a531-547a137aae21","Type":"ContainerStarted","Data":"9b78972310c9556c8896a8e1905d8f1256dfa1c5257d16aff20e8e756d472a4c"} Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.699700 
4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4" event={"ID":"55ce5ee6-47f4-4874-92dc-6ab78f2ce212","Type":"ContainerStarted","Data":"c6889bb0a1437b385995f9935900046a8b7e40d8e117c7cf186721da4929aed4"} Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.702603 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4" podUID="55ce5ee6-47f4-4874-92dc-6ab78f2ce212" Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.704492 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-798zt" event={"ID":"761a1a49-e01e-4674-b1f4-da732e1def98","Type":"ContainerStarted","Data":"ee6c24e22567787582321ee023eb314186b145ce7792fd58c3ac0bb32ea68bf7"} Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.705893 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231\\\"\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-798zt" podUID="761a1a49-e01e-4674-b1f4-da732e1def98" Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.706190 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-zmgll" event={"ID":"efb259b7-934f-4bc3-b502-633472d1a1c5","Type":"ContainerStarted","Data":"261098f48f1d26ebb4c75be3cadb08b9b9c660b7de3dd29d9855066e033691d5"} Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.708179 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-vpqw4" event={"ID":"50cfdf18-6a7e-4b3c-bb0f-5260fc3d42eb","Type":"ContainerStarted","Data":"b1578a57aad395e5ece82b0c12158468c4d9f2f5120badf5d29f82f41dc71ce1"} Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.713707 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc" event={"ID":"8c8feeec-377c-499a-b666-895010f8ebeb","Type":"ContainerStarted","Data":"fef568e9419c19adaca1121cd34af986643033aa54ba8a4f061832377e4d953b"} Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.715203 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc" podUID="8c8feeec-377c-499a-b666-895010f8ebeb" Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.716769 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-bv8wz" event={"ID":"bb9b2c3f-4f68-44fc-addf-2cf4421be015","Type":"ContainerStarted","Data":"0ac0a28c189579319e2ae1a4cb689567f964d4d85af14aaa79d7b3610635a8bc"} Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.719627 4881 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww" event={"ID":"c3b86204-5389-4b6a-bd45-fb6ee23b784e","Type":"ContainerStarted","Data":"d4b01fff042e17e842cb2aba4844d1807f3e65fc3b3c4a63724b2347d70689a1"} Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.721671 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww" podUID="c3b86204-5389-4b6a-bd45-fb6ee23b784e" Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.723953 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-n7kgd" event={"ID":"340257c4-9218-49b0-8a75-b2a4e0231fe3","Type":"ContainerStarted","Data":"705bc8c9961f2a159bfd5194f6f035adc5ac923dbc26dd216480b551db77a558"} Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.726771 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-9zp7h" event={"ID":"ba9a1249-fc58-4809-a472-d199afa9b52b","Type":"ContainerStarted","Data":"6ed1b9a3832f10fdbf3e2449a7b2bb34f9e26dc7a228af18531748da3e06a717"} Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.902957 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.903245 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.903397 4881 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.903448 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs podName:a55fdb43-cd6c-4415-8ef6-07f6c7da6272 nodeName:}" failed. No retries permitted until 2026-01-21 11:14:48.903431783 +0000 UTC m=+1076.163388252 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs") pod "openstack-operator-controller-manager-87d6d564b-ktcf8" (UID: "a55fdb43-cd6c-4415-8ef6-07f6c7da6272") : secret "metrics-server-cert" not found Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.903804 4881 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.903827 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs podName:a55fdb43-cd6c-4415-8ef6-07f6c7da6272 nodeName:}" failed. No retries permitted until 2026-01-21 11:14:48.903819922 +0000 UTC m=+1076.163776391 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs") pod "openstack-operator-controller-manager-87d6d564b-ktcf8" (UID: "a55fdb43-cd6c-4415-8ef6-07f6c7da6272") : secret "webhook-server-cert" not found Jan 21 11:14:45 crc kubenswrapper[4881]: E0121 11:14:45.746091 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4" podUID="55ce5ee6-47f4-4874-92dc-6ab78f2ce212" Jan 21 11:14:45 crc kubenswrapper[4881]: E0121 11:14:45.746346 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231\\\"\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-798zt" podUID="761a1a49-e01e-4674-b1f4-da732e1def98" Jan 21 11:14:45 crc kubenswrapper[4881]: E0121 11:14:45.746459 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32\\\"\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4" podUID="b72b2323-5329-4145-9cee-b447d9e2a304" Jan 21 11:14:45 crc kubenswrapper[4881]: E0121 11:14:45.746513 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737\\\"\"" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9" podUID="e8e6f423-a07b-4a22-9e39-efa8de22747e" Jan 21 11:14:45 crc kubenswrapper[4881]: E0121 11:14:45.746602 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz" podUID="2aac430e-3ac8-4624-8575-66386b5c2df3" Jan 21 11:14:45 crc kubenswrapper[4881]: 
E0121 11:14:45.746687 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92\\\"\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8" podUID="8c504afd-e4e0-4676-b292-b569b638a7dd" Jan 21 11:14:45 crc kubenswrapper[4881]: E0121 11:14:45.746827 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc" podUID="8c8feeec-377c-499a-b666-895010f8ebeb" Jan 21 11:14:45 crc kubenswrapper[4881]: E0121 11:14:45.749500 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww" podUID="c3b86204-5389-4b6a-bd45-fb6ee23b784e" Jan 21 11:14:48 crc kubenswrapper[4881]: I0121 11:14:48.114170 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert\") pod \"infra-operator-controller-manager-77c48c7859-klgq4\" (UID: \"2fe210a4-2adf-4b55-9a43-c1c390f51b0e\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" Jan 21 11:14:48 crc kubenswrapper[4881]: E0121 11:14:48.114521 4881 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 11:14:48 crc kubenswrapper[4881]: E0121 11:14:48.114565 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert podName:2fe210a4-2adf-4b55-9a43-c1c390f51b0e nodeName:}" failed. No retries permitted until 2026-01-21 11:14:56.114552476 +0000 UTC m=+1083.374508935 (durationBeforeRetry 8s). 
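Worth noting in the mount errors above and below: the durationBeforeRetry for the same failing secret volumes doubles on each attempt (4s at 11:14:44, 8s at 11:14:48, then 16s at 11:14:56). That is the volume manager's capped exponential backoff in nestedpendingoperations, which keeps the retry loop cheap until whatever publishes the certificate secrets catches up. A sketch of such a schedule; the 500 ms initial delay, factor of 2, and roughly two-minute cap are assumptions chosen only to match the observed 4s/8s/16s points:

from datetime import timedelta

def backoff_schedule(initial, factor, cap, attempts):
    """Yield the first `attempts` delays of a capped exponential backoff."""
    delay = initial
    for _ in range(attempts):
        yield delay
        delay = min(timedelta(seconds=delay.total_seconds() * factor), cap)

# Assumed constants; the log excerpt shows the 4s, 8s and 16s points of this
# sequence for the "metrics-certs", "webhook-certs" and "cert" volumes.
for d in backoff_schedule(timedelta(milliseconds=500), 2.0,
                          timedelta(minutes=2, seconds=2), 9):
    print(f"{d.total_seconds():g}s")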
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert") pod "infra-operator-controller-manager-77c48c7859-klgq4" (UID: "2fe210a4-2adf-4b55-9a43-c1c390f51b0e") : secret "infra-operator-webhook-server-cert" not found Jan 21 11:14:48 crc kubenswrapper[4881]: I0121 11:14:48.556320 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544795q\" (UID: \"b1b17be2-e382-4916-8e53-a68c85b5bfc2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q" Jan 21 11:14:48 crc kubenswrapper[4881]: E0121 11:14:48.556778 4881 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 11:14:48 crc kubenswrapper[4881]: E0121 11:14:48.557291 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert podName:b1b17be2-e382-4916-8e53-a68c85b5bfc2 nodeName:}" failed. No retries permitted until 2026-01-21 11:14:56.55726217 +0000 UTC m=+1083.817218639 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8544795q" (UID: "b1b17be2-e382-4916-8e53-a68c85b5bfc2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 11:14:48 crc kubenswrapper[4881]: I0121 11:14:48.962530 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:48 crc kubenswrapper[4881]: I0121 11:14:48.962630 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:48 crc kubenswrapper[4881]: E0121 11:14:48.962897 4881 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 11:14:48 crc kubenswrapper[4881]: E0121 11:14:48.962962 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs podName:a55fdb43-cd6c-4415-8ef6-07f6c7da6272 nodeName:}" failed. No retries permitted until 2026-01-21 11:14:56.96294156 +0000 UTC m=+1084.222898039 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs") pod "openstack-operator-controller-manager-87d6d564b-ktcf8" (UID: "a55fdb43-cd6c-4415-8ef6-07f6c7da6272") : secret "metrics-server-cert" not found Jan 21 11:14:48 crc kubenswrapper[4881]: E0121 11:14:48.963451 4881 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 11:14:48 crc kubenswrapper[4881]: E0121 11:14:48.963558 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs podName:a55fdb43-cd6c-4415-8ef6-07f6c7da6272 nodeName:}" failed. No retries permitted until 2026-01-21 11:14:56.963534446 +0000 UTC m=+1084.223491085 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs") pod "openstack-operator-controller-manager-87d6d564b-ktcf8" (UID: "a55fdb43-cd6c-4415-8ef6-07f6c7da6272") : secret "webhook-server-cert" not found Jan 21 11:14:56 crc kubenswrapper[4881]: I0121 11:14:56.288134 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert\") pod \"infra-operator-controller-manager-77c48c7859-klgq4\" (UID: \"2fe210a4-2adf-4b55-9a43-c1c390f51b0e\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" Jan 21 11:14:56 crc kubenswrapper[4881]: E0121 11:14:56.288332 4881 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 11:14:56 crc kubenswrapper[4881]: E0121 11:14:56.289072 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert podName:2fe210a4-2adf-4b55-9a43-c1c390f51b0e nodeName:}" failed. No retries permitted until 2026-01-21 11:15:12.289052364 +0000 UTC m=+1099.549008843 (durationBeforeRetry 16s). 
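All of these MountVolume failures reduce to four secrets that do not yet exist in the openstack-operators namespace: metrics-server-cert, webhook-server-cert, infra-operator-webhook-server-cert, and openstack-baremetal-operator-webhook-server-cert. The kubelet can only retry; the fix has to come from whichever component issues those certificates, and the mounts below do start succeeding from 11:14:56 onward once the secrets appear. A small diagnostic sketch with the official Kubernetes Python client (an illustrative helper, not part of any component in this log) that reports which of the named secrets exist:

from kubernetes import client, config
from kubernetes.client.rest import ApiException

# Secret names taken from the 'secret ... not found' errors in this log.
NAMESPACE = "openstack-operators"
SECRETS = [
    "metrics-server-cert",
    "webhook-server-cert",
    "infra-operator-webhook-server-cert",
    "openstack-baremetal-operator-webhook-server-cert",
]

def main():
    config.load_kube_config()  # or config.load_incluster_config() in a pod
    v1 = client.CoreV1Api()
    for name in SECRETS:
        try:
            v1.read_namespaced_secret(name, NAMESPACE)
            print(f"{name}: present")
        except ApiException as e:
            if e.status == 404:
                print(f"{name}: MISSING (kubelet keeps backing off the mount)")
            else:
                raise

if __name__ == "__main__":
    main()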
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert") pod "infra-operator-controller-manager-77c48c7859-klgq4" (UID: "2fe210a4-2adf-4b55-9a43-c1c390f51b0e") : secret "infra-operator-webhook-server-cert" not found Jan 21 11:14:56 crc kubenswrapper[4881]: I0121 11:14:56.593827 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544795q\" (UID: \"b1b17be2-e382-4916-8e53-a68c85b5bfc2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q" Jan 21 11:14:56 crc kubenswrapper[4881]: I0121 11:14:56.649852 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544795q\" (UID: \"b1b17be2-e382-4916-8e53-a68c85b5bfc2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q" Jan 21 11:14:56 crc kubenswrapper[4881]: I0121 11:14:56.839634 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-fzcfv" Jan 21 11:14:56 crc kubenswrapper[4881]: I0121 11:14:56.849033 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q" Jan 21 11:14:57 crc kubenswrapper[4881]: I0121 11:14:57.040976 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:57 crc kubenswrapper[4881]: I0121 11:14:57.041081 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:57 crc kubenswrapper[4881]: I0121 11:14:57.102409 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:57 crc kubenswrapper[4881]: I0121 11:14:57.102499 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:57 crc kubenswrapper[4881]: I0121 11:14:57.138543 4881 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-t9k6g" Jan 21 11:14:57 crc kubenswrapper[4881]: I0121 11:14:57.147088 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:57 crc kubenswrapper[4881]: E0121 11:14:57.575358 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf" Jan 21 11:14:57 crc kubenswrapper[4881]: E0121 11:14:57.575556 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9b5pw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-vpqw4_openstack-operators(50cfdf18-6a7e-4b3c-bb0f-5260fc3d42eb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:14:57 crc kubenswrapper[4881]: E0121 11:14:57.577774 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-vpqw4" 
podUID="50cfdf18-6a7e-4b3c-bb0f-5260fc3d42eb" Jan 21 11:14:58 crc kubenswrapper[4881]: E0121 11:14:58.070383 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-vpqw4" podUID="50cfdf18-6a7e-4b3c-bb0f-5260fc3d42eb" Jan 21 11:14:58 crc kubenswrapper[4881]: E0121 11:14:58.408140 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729" Jan 21 11:14:58 crc kubenswrapper[4881]: E0121 11:14:58.408834 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nhjc2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7fc9b76cf6-n7kgd_openstack-operators(340257c4-9218-49b0-8a75-b2a4e0231fe3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:14:58 crc kubenswrapper[4881]: E0121 11:14:58.410238 4881 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-n7kgd" podUID="340257c4-9218-49b0-8a75-b2a4e0231fe3" Jan 21 11:14:59 crc kubenswrapper[4881]: E0121 11:14:59.271086 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-n7kgd" podUID="340257c4-9218-49b0-8a75-b2a4e0231fe3" Jan 21 11:14:59 crc kubenswrapper[4881]: E0121 11:14:59.954255 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:ddb59f1a8e3fd0d641405e371e33b3d8c913af08e40e84f390e7e06f0a7f3488" Jan 21 11:14:59 crc kubenswrapper[4881]: E0121 11:14:59.954487 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:ddb59f1a8e3fd0d641405e371e33b3d8c913af08e40e84f390e7e06f0a7f3488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8z9wn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-9b68f5989-7qgck_openstack-operators(a028dcae-6b9d-414d-8bab-652f301de541): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:14:59 crc kubenswrapper[4881]: E0121 11:14:59.955680 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-7qgck" podUID="a028dcae-6b9d-414d-8bab-652f301de541" Jan 21 11:15:00 crc kubenswrapper[4881]: I0121 11:15:00.146313 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb"] Jan 21 11:15:00 crc kubenswrapper[4881]: I0121 11:15:00.147283 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" Jan 21 11:15:00 crc kubenswrapper[4881]: I0121 11:15:00.150184 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 11:15:00 crc kubenswrapper[4881]: I0121 11:15:00.150316 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 11:15:00 crc kubenswrapper[4881]: I0121 11:15:00.155633 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv8v5\" (UniqueName: \"kubernetes.io/projected/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-kube-api-access-kv8v5\") pod \"collect-profiles-29483235-h6fqb\" (UID: \"c37f0ee6-fcc1-4663-91a3-ab5e47dad851\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" Jan 21 11:15:00 crc kubenswrapper[4881]: I0121 11:15:00.155866 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-secret-volume\") pod \"collect-profiles-29483235-h6fqb\" (UID: \"c37f0ee6-fcc1-4663-91a3-ab5e47dad851\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" Jan 21 11:15:00 crc kubenswrapper[4881]: I0121 11:15:00.155941 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-config-volume\") pod \"collect-profiles-29483235-h6fqb\" (UID: \"c37f0ee6-fcc1-4663-91a3-ab5e47dad851\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" Jan 21 11:15:00 crc kubenswrapper[4881]: I0121 11:15:00.156543 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb"] Jan 21 11:15:00 crc kubenswrapper[4881]: I0121 11:15:00.257421 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-secret-volume\") pod \"collect-profiles-29483235-h6fqb\" (UID: \"c37f0ee6-fcc1-4663-91a3-ab5e47dad851\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" Jan 21 11:15:00 crc kubenswrapper[4881]: I0121 11:15:00.257480 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-config-volume\") pod \"collect-profiles-29483235-h6fqb\" (UID: 
\"c37f0ee6-fcc1-4663-91a3-ab5e47dad851\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" Jan 21 11:15:00 crc kubenswrapper[4881]: I0121 11:15:00.257517 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv8v5\" (UniqueName: \"kubernetes.io/projected/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-kube-api-access-kv8v5\") pod \"collect-profiles-29483235-h6fqb\" (UID: \"c37f0ee6-fcc1-4663-91a3-ab5e47dad851\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" Jan 21 11:15:00 crc kubenswrapper[4881]: I0121 11:15:00.258992 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-config-volume\") pod \"collect-profiles-29483235-h6fqb\" (UID: \"c37f0ee6-fcc1-4663-91a3-ab5e47dad851\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" Jan 21 11:15:00 crc kubenswrapper[4881]: E0121 11:15:00.296268 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:ddb59f1a8e3fd0d641405e371e33b3d8c913af08e40e84f390e7e06f0a7f3488\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-7qgck" podUID="a028dcae-6b9d-414d-8bab-652f301de541" Jan 21 11:15:00 crc kubenswrapper[4881]: I0121 11:15:00.300258 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-secret-volume\") pod \"collect-profiles-29483235-h6fqb\" (UID: \"c37f0ee6-fcc1-4663-91a3-ab5e47dad851\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" Jan 21 11:15:00 crc kubenswrapper[4881]: I0121 11:15:00.317690 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv8v5\" (UniqueName: \"kubernetes.io/projected/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-kube-api-access-kv8v5\") pod \"collect-profiles-29483235-h6fqb\" (UID: \"c37f0ee6-fcc1-4663-91a3-ab5e47dad851\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" Jan 21 11:15:00 crc kubenswrapper[4881]: I0121 11:15:00.477845 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" Jan 21 11:15:01 crc kubenswrapper[4881]: E0121 11:15:01.608613 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8" Jan 21 11:15:01 crc kubenswrapper[4881]: E0121 11:15:01.608902 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zvpzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-9f958b845-4wmln_openstack-operators(36e5ddfe-67a4-4721-9ef5-b9459c64bf5c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:15:01 crc kubenswrapper[4881]: E0121 11:15:01.610854 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-4wmln" podUID="36e5ddfe-67a4-4721-9ef5-b9459c64bf5c" Jan 21 11:15:02 crc kubenswrapper[4881]: E0121 11:15:02.357950 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8\\\"\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-4wmln" podUID="36e5ddfe-67a4-4721-9ef5-b9459c64bf5c" Jan 21 11:15:02 crc kubenswrapper[4881]: E0121 11:15:02.935125 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a" Jan 21 11:15:02 crc kubenswrapper[4881]: E0121 11:15:02.935703 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-znqn9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7ddb5c749-svq8w_openstack-operators(848fd8db-3bd5-4e22-96ca-f69b181e48be): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:15:02 crc kubenswrapper[4881]: E0121 11:15:02.936935 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-svq8w" podUID="848fd8db-3bd5-4e22-96ca-f69b181e48be" Jan 21 11:15:03 crc kubenswrapper[4881]: E0121 11:15:03.369718 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-svq8w" podUID="848fd8db-3bd5-4e22-96ca-f69b181e48be" Jan 21 11:15:06 crc kubenswrapper[4881]: E0121 11:15:06.284429 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:d69a68cdac59165797daf1064f3a3b4b14b546bf1c7254070a7ed1238998c028" Jan 21 11:15:06 crc kubenswrapper[4881]: E0121 11:15:06.284926 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:d69a68cdac59165797daf1064f3a3b4b14b546bf1c7254070a7ed1238998c028,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n7p2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-c6994669c-jv7cr_openstack-operators(1f795f92-d385-49bc-bc91-5109734f4d5a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:15:06 crc kubenswrapper[4881]: 
E0121 11:15:06.286945 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-c6994669c-jv7cr" podUID="1f795f92-d385-49bc-bc91-5109734f4d5a" Jan 21 11:15:06 crc kubenswrapper[4881]: E0121 11:15:06.393236 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:d69a68cdac59165797daf1064f3a3b4b14b546bf1c7254070a7ed1238998c028\\\"\"" pod="openstack-operators/glance-operator-controller-manager-c6994669c-jv7cr" podUID="1f795f92-d385-49bc-bc91-5109734f4d5a" Jan 21 11:15:12 crc kubenswrapper[4881]: I0121 11:15:12.276103 4881 trace.go:236] Trace[1986326163]: "Calculate volume metrics of kube-api-access-l24bg for pod cert-manager/cert-manager-cainjector-cf98fcc89-cdm4s" (21-Jan-2026 11:15:11.021) (total time: 1254ms): Jan 21 11:15:12 crc kubenswrapper[4881]: Trace[1986326163]: [1.254933196s] [1.254933196s] END Jan 21 11:15:12 crc kubenswrapper[4881]: I0121 11:15:12.277805 4881 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-rslv2 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 11:15:12 crc kubenswrapper[4881]: I0121 11:15:12.277860 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" podUID="537a87a4-8f58-441f-9199-62c5849c693c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 11:15:12 crc kubenswrapper[4881]: I0121 11:15:12.379276 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert\") pod \"infra-operator-controller-manager-77c48c7859-klgq4\" (UID: \"2fe210a4-2adf-4b55-9a43-c1c390f51b0e\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" Jan 21 11:15:12 crc kubenswrapper[4881]: I0121 11:15:12.385336 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert\") pod \"infra-operator-controller-manager-77c48c7859-klgq4\" (UID: \"2fe210a4-2adf-4b55-9a43-c1c390f51b0e\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" Jan 21 11:15:12 crc kubenswrapper[4881]: I0121 11:15:12.599844 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-m6lch" Jan 21 11:15:12 crc kubenswrapper[4881]: I0121 11:15:12.607607 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" Jan 21 11:15:17 crc kubenswrapper[4881]: E0121 11:15:17.576404 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e" Jan 21 11:15:17 crc kubenswrapper[4881]: E0121 11:15:17.577452 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pxbkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7cd8bc9dbb-tttcz_openstack-operators(2aac430e-3ac8-4624-8575-66386b5c2df3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:15:17 crc kubenswrapper[4881]: E0121 11:15:17.579423 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz" podUID="2aac430e-3ac8-4624-8575-66386b5c2df3" Jan 21 11:15:17 crc kubenswrapper[4881]: I0121 11:15:17.608044 4881 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-marketplace/certified-operators-7wxr8" podUID="6e9defc7-ad37-4742-b149-cb71d7ea177a" containerName="registry-server" probeResult="failure" output=< Jan 21 11:15:17 crc kubenswrapper[4881]: timeout: health rpc did not complete within 1s Jan 21 11:15:17 crc kubenswrapper[4881]: > Jan 21 11:15:17 crc kubenswrapper[4881]: I0121 11:15:17.609686 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-7wxr8" podUID="6e9defc7-ad37-4742-b149-cb71d7ea177a" containerName="registry-server" probeResult="failure" output=< Jan 21 11:15:17 crc kubenswrapper[4881]: timeout: health rpc did not complete within 1s Jan 21 11:15:17 crc kubenswrapper[4881]: > Jan 21 11:15:19 crc kubenswrapper[4881]: E0121 11:15:19.894856 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843" Jan 21 11:15:19 crc kubenswrapper[4881]: E0121 11:15:19.895374 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nm5x4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5f8f495fcf-fcht4_openstack-operators(55ce5ee6-47f4-4874-92dc-6ab78f2ce212): ErrImagePull: rpc error: code 
= Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:15:19 crc kubenswrapper[4881]: E0121 11:15:19.896847 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4" podUID="55ce5ee6-47f4-4874-92dc-6ab78f2ce212" Jan 21 11:15:20 crc kubenswrapper[4881]: E0121 11:15:20.698912 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231" Jan 21 11:15:20 crc kubenswrapper[4881]: E0121 11:15:20.699149 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-96fmk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-65849867d6-798zt_openstack-operators(761a1a49-e01e-4674-b1f4-da732e1def98): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:15:20 crc kubenswrapper[4881]: E0121 11:15:20.700586 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-798zt" podUID="761a1a49-e01e-4674-b1f4-da732e1def98" Jan 21 11:15:21 crc kubenswrapper[4881]: E0121 11:15:21.540727 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32" Jan 21 11:15:21 crc kubenswrapper[4881]: E0121 11:15:21.541186 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wq74t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-864f6b75bf-h6dr4_openstack-operators(b72b2323-5329-4145-9cee-b447d9e2a304): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:15:21 crc kubenswrapper[4881]: E0121 11:15:21.543298 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4" podUID="b72b2323-5329-4145-9cee-b447d9e2a304" Jan 21 11:15:22 crc kubenswrapper[4881]: I0121 
11:15:22.084018 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q"] Jan 21 11:15:24 crc kubenswrapper[4881]: I0121 11:15:24.896830 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q" event={"ID":"b1b17be2-e382-4916-8e53-a68c85b5bfc2","Type":"ContainerStarted","Data":"0048c64a89fa99df970b415fb3ce60253d1737b9b9ec85451632d9017fdfac41"} Jan 21 11:15:25 crc kubenswrapper[4881]: I0121 11:15:25.523136 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8"] Jan 21 11:15:25 crc kubenswrapper[4881]: E0121 11:15:25.762413 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 21 11:15:25 crc kubenswrapper[4881]: E0121 11:15:25.762672 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jfj8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-76qxc_openstack-operators(8c8feeec-377c-499a-b666-895010f8ebeb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:15:25 crc kubenswrapper[4881]: E0121 11:15:25.764506 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc" podUID="8c8feeec-377c-499a-b666-895010f8ebeb" Jan 21 11:15:25 crc kubenswrapper[4881]: I0121 11:15:25.910830 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" event={"ID":"a55fdb43-cd6c-4415-8ef6-07f6c7da6272","Type":"ContainerStarted","Data":"5c5727274545ebad33744301076582e79e5dc9cc83c053a0dac5467d5716cb2d"} Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.238697 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb"] Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.342512 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4"] Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.933891 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-5qcms" event={"ID":"d0cafd1d-5f37-499a-a531-547a137aae21","Type":"ContainerStarted","Data":"134803dd77fbcf302659b8f128e932eb1c9179c03abbc1043d52d65470d38ba1"} Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.934480 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-5qcms" Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.937300 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-svq8w" event={"ID":"848fd8db-3bd5-4e22-96ca-f69b181e48be","Type":"ContainerStarted","Data":"0d5be4fd016179db3483c6888a6d1b657e6fdd493c2a026f0647701c3a1db78c"} Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.937567 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-svq8w" Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.938867 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-bv8wz" event={"ID":"bb9b2c3f-4f68-44fc-addf-2cf4421be015","Type":"ContainerStarted","Data":"75cbd7d794f72c24c1153a927c2c056f23e41395f9670194737215511fef8da9"} Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.939020 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-bv8wz" Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.940165 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" event={"ID":"2fe210a4-2adf-4b55-9a43-c1c390f51b0e","Type":"ContainerStarted","Data":"9c3cd6fd76ccb3e1aebf3c144313292f304e07697bda9f59f0bd38c7102cae69"} Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.942832 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-n7kgd" event={"ID":"340257c4-9218-49b0-8a75-b2a4e0231fe3","Type":"ContainerStarted","Data":"7acfecd37cad07ec1dd7df4569586025cfb66a05a725369f74cee260b965c5d6"} Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.943296 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-n7kgd" Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.949180 4881 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-zmgll" event={"ID":"efb259b7-934f-4bc3-b502-633472d1a1c5","Type":"ContainerStarted","Data":"2ffeca8fec4eb946b8f37fa7f383d2a2fa4c9b2c224984d9449590d48df8fbcc"} Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.949360 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-zmgll" Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.953329 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-9zp7h" event={"ID":"ba9a1249-fc58-4809-a472-d199afa9b52b","Type":"ContainerStarted","Data":"582cfcc5c1ddf71d8e17d3aabeca2b879f7bd34e3fbf062b6c5a1d8eeddeb7c6"} Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.954893 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-9zp7h" Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.971117 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-4wmln" event={"ID":"36e5ddfe-67a4-4721-9ef5-b9459c64bf5c","Type":"ContainerStarted","Data":"d0564b32fc1cc85ec20378db752a0cd98f3ad490e7279922c5cf5b475bee8972"} Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.972376 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-9f958b845-4wmln" Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.979392 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-5qcms" podStartSLOduration=23.256460966 podStartE2EDuration="46.979370683s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.490070391 +0000 UTC m=+1071.750026850" lastFinishedPulling="2026-01-21 11:15:08.212980098 +0000 UTC m=+1095.472936567" observedRunningTime="2026-01-21 11:15:26.97282632 +0000 UTC m=+1114.232782789" watchObservedRunningTime="2026-01-21 11:15:26.979370683 +0000 UTC m=+1114.239327152" Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.985256 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" event={"ID":"c37f0ee6-fcc1-4663-91a3-ab5e47dad851","Type":"ContainerStarted","Data":"b5629bef799bd58fd7c322f334ed2c842d7e326aba733a303f14c5c0f68e0efa"} Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.987110 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-849fd9b886-k9t7q" event={"ID":"1cebbaaf-6189-409a-8f25-43d7fac77f95","Type":"ContainerStarted","Data":"57d2d3483eb94a11159fbf1a965ba524634046511bb7497d4075264dd9f612cc"} Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.988090 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-849fd9b886-k9t7q" Jan 21 11:15:27 crc kubenswrapper[4881]: I0121 11:15:27.003589 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww" event={"ID":"c3b86204-5389-4b6a-bd45-fb6ee23b784e","Type":"ContainerStarted","Data":"3d993e2c3c267c5fb1d5c8678bfc830cbf513bb2a53348ecdd6965049ed3d807"} Jan 21 11:15:27 crc kubenswrapper[4881]: I0121 11:15:27.004535 
Jan 21 11:15:27 crc kubenswrapper[4881]: I0121 11:15:27.004535 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww"
Jan 21 11:15:27 crc kubenswrapper[4881]: I0121 11:15:27.008715 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" event={"ID":"a55fdb43-cd6c-4415-8ef6-07f6c7da6272","Type":"ContainerStarted","Data":"a966ab60808193570083f09ccfb55452509cf01f5e2a2fc1c5f47bae085f504e"}
Jan 21 11:15:27 crc kubenswrapper[4881]: I0121 11:15:27.009844 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8"
Jan 21 11:15:27 crc kubenswrapper[4881]: I0121 11:15:27.026347 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-s6gm8" event={"ID":"4c2550fe-b3eb-4eef-8ffc-ebb4a9ce1b5f","Type":"ContainerStarted","Data":"593d7c66925d15e46c74a91403fde16bbc659993e2f13211ec8ae807ed8ad22e"}
Jan 21 11:15:27 crc kubenswrapper[4881]: I0121 11:15:27.027720 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-s6gm8"
Jan 21 11:15:27 crc kubenswrapper[4881]: I0121 11:15:27.075980 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-zmgll" podStartSLOduration=23.904754909 podStartE2EDuration="47.075962499s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.24143428 +0000 UTC m=+1071.501390749" lastFinishedPulling="2026-01-21 11:15:07.41264186 +0000 UTC m=+1094.672598339" observedRunningTime="2026-01-21 11:15:27.036032745 +0000 UTC m=+1114.295989214" watchObservedRunningTime="2026-01-21 11:15:27.075962499 +0000 UTC m=+1114.335918968"
Jan 21 11:15:27 crc kubenswrapper[4881]: I0121 11:15:27.078711 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-svq8w" podStartSLOduration=6.503846803 podStartE2EDuration="48.078704807s" podCreationTimestamp="2026-01-21 11:14:39 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.218917769 +0000 UTC m=+1071.478874238" lastFinishedPulling="2026-01-21 11:15:25.793775773 +0000 UTC m=+1113.053732242" observedRunningTime="2026-01-21 11:15:27.072716748 +0000 UTC m=+1114.332673217" watchObservedRunningTime="2026-01-21 11:15:27.078704807 +0000 UTC m=+1114.338661266"
Jan 21 11:15:27 crc kubenswrapper[4881]: I0121 11:15:27.609538 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-9zp7h" podStartSLOduration=24.377316083 podStartE2EDuration="47.609517513s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.180355218 +0000 UTC m=+1071.440311687" lastFinishedPulling="2026-01-21 11:15:07.412556648 +0000 UTC m=+1094.672513117" observedRunningTime="2026-01-21 11:15:27.608458367 +0000 UTC m=+1114.868414836" watchObservedRunningTime="2026-01-21 11:15:27.609517513 +0000 UTC m=+1114.869473982"
Jan 21 11:15:27 crc kubenswrapper[4881]: I0121 11:15:27.715366 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-n7kgd" podStartSLOduration=6.008531089 podStartE2EDuration="47.715345539s" 
podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.091591879 +0000 UTC m=+1071.351548348" lastFinishedPulling="2026-01-21 11:15:25.798406329 +0000 UTC m=+1113.058362798" observedRunningTime="2026-01-21 11:15:27.712020056 +0000 UTC m=+1114.971976525" watchObservedRunningTime="2026-01-21 11:15:27.715345539 +0000 UTC m=+1114.975302008" Jan 21 11:15:27 crc kubenswrapper[4881]: I0121 11:15:27.853898 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-bv8wz" podStartSLOduration=23.717445864 podStartE2EDuration="47.853861558s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.076320788 +0000 UTC m=+1071.336277257" lastFinishedPulling="2026-01-21 11:15:08.212736482 +0000 UTC m=+1095.472692951" observedRunningTime="2026-01-21 11:15:27.786190612 +0000 UTC m=+1115.046147091" watchObservedRunningTime="2026-01-21 11:15:27.853861558 +0000 UTC m=+1115.113818027" Jan 21 11:15:28 crc kubenswrapper[4881]: I0121 11:15:28.152925 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8" event={"ID":"8c504afd-e4e0-4676-b292-b569b638a7dd","Type":"ContainerStarted","Data":"a83a5499d8117eabf9e4c8defff59361671d700500aed6e9e45489a025b95b6b"} Jan 21 11:15:28 crc kubenswrapper[4881]: I0121 11:15:28.156694 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8" Jan 21 11:15:28 crc kubenswrapper[4881]: I0121 11:15:28.164669 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-vpqw4" event={"ID":"50cfdf18-6a7e-4b3c-bb0f-5260fc3d42eb","Type":"ContainerStarted","Data":"1674472cc072745705294b1d7a2ba6968803bca0481f9d4533791647066f7a85"} Jan 21 11:15:28 crc kubenswrapper[4881]: I0121 11:15:28.166914 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-vpqw4" Jan 21 11:15:28 crc kubenswrapper[4881]: I0121 11:15:28.170425 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9" event={"ID":"e8e6f423-a07b-4a22-9e39-efa8de22747e","Type":"ContainerStarted","Data":"7979b5e10538277149f3b9bcc1c010cdc994d994df73f8a7a43087eb64a0f49c"} Jan 21 11:15:28 crc kubenswrapper[4881]: I0121 11:15:28.170918 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9" Jan 21 11:15:28 crc kubenswrapper[4881]: I0121 11:15:28.295818 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-9f958b845-4wmln" podStartSLOduration=7.673196027 podStartE2EDuration="49.295798751s" podCreationTimestamp="2026-01-21 11:14:39 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.180050981 +0000 UTC m=+1071.440007450" lastFinishedPulling="2026-01-21 11:15:25.802653705 +0000 UTC m=+1113.062610174" observedRunningTime="2026-01-21 11:15:28.290972821 +0000 UTC m=+1115.550929290" watchObservedRunningTime="2026-01-21 11:15:28.295798751 +0000 UTC m=+1115.555755220" Jan 21 11:15:28 crc kubenswrapper[4881]: I0121 11:15:28.345169 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8" podStartSLOduration=7.085110125 podStartE2EDuration="48.345152301s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.513489343 +0000 UTC m=+1071.773445812" lastFinishedPulling="2026-01-21 11:15:25.773531499 +0000 UTC m=+1113.033487988" observedRunningTime="2026-01-21 11:15:28.33750352 +0000 UTC m=+1115.597459999" watchObservedRunningTime="2026-01-21 11:15:28.345152301 +0000 UTC m=+1115.605108770" Jan 21 11:15:28 crc kubenswrapper[4881]: I0121 11:15:28.561713 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9" podStartSLOduration=7.28816487 podStartE2EDuration="48.561687001s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.50006539 +0000 UTC m=+1071.760021859" lastFinishedPulling="2026-01-21 11:15:25.773587511 +0000 UTC m=+1113.033543990" observedRunningTime="2026-01-21 11:15:28.539271333 +0000 UTC m=+1115.799227812" watchObservedRunningTime="2026-01-21 11:15:28.561687001 +0000 UTC m=+1115.821643470" Jan 21 11:15:28 crc kubenswrapper[4881]: I0121 11:15:28.567422 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-s6gm8" podStartSLOduration=24.576440632 podStartE2EDuration="48.567401374s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.220047217 +0000 UTC m=+1071.480003696" lastFinishedPulling="2026-01-21 11:15:08.211007969 +0000 UTC m=+1095.470964438" observedRunningTime="2026-01-21 11:15:28.566121242 +0000 UTC m=+1115.826077711" watchObservedRunningTime="2026-01-21 11:15:28.567401374 +0000 UTC m=+1115.827357843" Jan 21 11:15:28 crc kubenswrapper[4881]: I0121 11:15:28.616359 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww" podStartSLOduration=7.059489275 podStartE2EDuration="48.616338722s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.241588433 +0000 UTC m=+1071.501544902" lastFinishedPulling="2026-01-21 11:15:25.79843788 +0000 UTC m=+1113.058394349" observedRunningTime="2026-01-21 11:15:28.607850291 +0000 UTC m=+1115.867806770" watchObservedRunningTime="2026-01-21 11:15:28.616338722 +0000 UTC m=+1115.876295211" Jan 21 11:15:28 crc kubenswrapper[4881]: I0121 11:15:28.782333 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-vpqw4" podStartSLOduration=7.471981286 podStartE2EDuration="48.782309795s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.488154492 +0000 UTC m=+1071.748110961" lastFinishedPulling="2026-01-21 11:15:25.798483001 +0000 UTC m=+1113.058439470" observedRunningTime="2026-01-21 11:15:28.781621268 +0000 UTC m=+1116.041577737" watchObservedRunningTime="2026-01-21 11:15:28.782309795 +0000 UTC m=+1116.042266264" Jan 21 11:15:28 crc kubenswrapper[4881]: I0121 11:15:28.782859 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-849fd9b886-k9t7q" podStartSLOduration=24.785807954 podStartE2EDuration="48.782852668s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.213969675 +0000 
UTC m=+1071.473926144" lastFinishedPulling="2026-01-21 11:15:08.211014389 +0000 UTC m=+1095.470970858" observedRunningTime="2026-01-21 11:15:28.670297996 +0000 UTC m=+1115.930254475" watchObservedRunningTime="2026-01-21 11:15:28.782852668 +0000 UTC m=+1116.042809137" Jan 21 11:15:29 crc kubenswrapper[4881]: I0121 11:15:29.188379 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-jv7cr" event={"ID":"1f795f92-d385-49bc-bc91-5109734f4d5a","Type":"ContainerStarted","Data":"b3edf28ac7eef119da54cafded18ce56ede9f57a68a95eec0a79655af9ea1d0d"} Jan 21 11:15:29 crc kubenswrapper[4881]: I0121 11:15:29.188605 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-c6994669c-jv7cr" Jan 21 11:15:29 crc kubenswrapper[4881]: I0121 11:15:29.198389 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-7qgck" event={"ID":"a028dcae-6b9d-414d-8bab-652f301de541","Type":"ContainerStarted","Data":"829dee12939d6e36d536226ad4cd65d36d606cc10b5d418fb9e9bfbd4a261f34"} Jan 21 11:15:29 crc kubenswrapper[4881]: I0121 11:15:29.199200 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-7qgck" Jan 21 11:15:29 crc kubenswrapper[4881]: I0121 11:15:29.209308 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" event={"ID":"c37f0ee6-fcc1-4663-91a3-ab5e47dad851","Type":"ContainerStarted","Data":"4ef110f660eb1c97d787ba6c2683b1ded92c0cd6a25a9dac3c9da2e19fd3d06a"} Jan 21 11:15:29 crc kubenswrapper[4881]: I0121 11:15:29.222660 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" podStartSLOduration=49.222632548 podStartE2EDuration="49.222632548s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:15:28.899450181 +0000 UTC m=+1116.159406660" watchObservedRunningTime="2026-01-21 11:15:29.222632548 +0000 UTC m=+1116.482589017" Jan 21 11:15:29 crc kubenswrapper[4881]: I0121 11:15:29.223405 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-c6994669c-jv7cr" podStartSLOduration=6.342991528 podStartE2EDuration="49.223398378s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.181824656 +0000 UTC m=+1071.441781125" lastFinishedPulling="2026-01-21 11:15:27.062231506 +0000 UTC m=+1114.322187975" observedRunningTime="2026-01-21 11:15:29.218632599 +0000 UTC m=+1116.478589068" watchObservedRunningTime="2026-01-21 11:15:29.223398378 +0000 UTC m=+1116.483354847" Jan 21 11:15:29 crc kubenswrapper[4881]: I0121 11:15:29.993304 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-7qgck" podStartSLOduration=9.318863134 podStartE2EDuration="50.993279087s" podCreationTimestamp="2026-01-21 11:14:39 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.10048251 +0000 UTC m=+1071.360438979" lastFinishedPulling="2026-01-21 11:15:25.774898463 +0000 UTC m=+1113.034854932" observedRunningTime="2026-01-21 11:15:29.283538175 +0000 UTC 
m=+1116.543494654" watchObservedRunningTime="2026-01-21 11:15:29.993279087 +0000 UTC m=+1117.253235556" Jan 21 11:15:30 crc kubenswrapper[4881]: I0121 11:15:30.232255 4881 generic.go:334] "Generic (PLEG): container finished" podID="c37f0ee6-fcc1-4663-91a3-ab5e47dad851" containerID="4ef110f660eb1c97d787ba6c2683b1ded92c0cd6a25a9dac3c9da2e19fd3d06a" exitCode=0 Jan 21 11:15:30 crc kubenswrapper[4881]: I0121 11:15:30.233462 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" event={"ID":"c37f0ee6-fcc1-4663-91a3-ab5e47dad851","Type":"ContainerDied","Data":"4ef110f660eb1c97d787ba6c2683b1ded92c0cd6a25a9dac3c9da2e19fd3d06a"} Jan 21 11:15:31 crc kubenswrapper[4881]: I0121 11:15:31.225546 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-849fd9b886-k9t7q" Jan 21 11:15:32 crc kubenswrapper[4881]: E0121 11:15:32.350232 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz" podUID="2aac430e-3ac8-4624-8575-66386b5c2df3" Jan 21 11:15:32 crc kubenswrapper[4881]: E0121 11:15:32.350262 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4" podUID="55ce5ee6-47f4-4874-92dc-6ab78f2ce212" Jan 21 11:15:32 crc kubenswrapper[4881]: I0121 11:15:32.694521 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" Jan 21 11:15:32 crc kubenswrapper[4881]: I0121 11:15:32.829979 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv8v5\" (UniqueName: \"kubernetes.io/projected/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-kube-api-access-kv8v5\") pod \"c37f0ee6-fcc1-4663-91a3-ab5e47dad851\" (UID: \"c37f0ee6-fcc1-4663-91a3-ab5e47dad851\") " Jan 21 11:15:32 crc kubenswrapper[4881]: I0121 11:15:32.830178 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-config-volume\") pod \"c37f0ee6-fcc1-4663-91a3-ab5e47dad851\" (UID: \"c37f0ee6-fcc1-4663-91a3-ab5e47dad851\") " Jan 21 11:15:32 crc kubenswrapper[4881]: I0121 11:15:32.830283 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-secret-volume\") pod \"c37f0ee6-fcc1-4663-91a3-ab5e47dad851\" (UID: \"c37f0ee6-fcc1-4663-91a3-ab5e47dad851\") " Jan 21 11:15:32 crc kubenswrapper[4881]: I0121 11:15:32.832435 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-config-volume" (OuterVolumeSpecName: "config-volume") pod "c37f0ee6-fcc1-4663-91a3-ab5e47dad851" (UID: "c37f0ee6-fcc1-4663-91a3-ab5e47dad851"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:15:32 crc kubenswrapper[4881]: I0121 11:15:32.838208 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c37f0ee6-fcc1-4663-91a3-ab5e47dad851" (UID: "c37f0ee6-fcc1-4663-91a3-ab5e47dad851"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:15:32 crc kubenswrapper[4881]: I0121 11:15:32.838294 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-kube-api-access-kv8v5" (OuterVolumeSpecName: "kube-api-access-kv8v5") pod "c37f0ee6-fcc1-4663-91a3-ab5e47dad851" (UID: "c37f0ee6-fcc1-4663-91a3-ab5e47dad851"). InnerVolumeSpecName "kube-api-access-kv8v5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:15:33 crc kubenswrapper[4881]: I0121 11:15:33.011272 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 11:15:33 crc kubenswrapper[4881]: I0121 11:15:33.011305 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 11:15:33 crc kubenswrapper[4881]: I0121 11:15:33.011318 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv8v5\" (UniqueName: \"kubernetes.io/projected/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-kube-api-access-kv8v5\") on node \"crc\" DevicePath \"\"" Jan 21 11:15:33 crc kubenswrapper[4881]: I0121 11:15:33.450257 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" event={"ID":"c37f0ee6-fcc1-4663-91a3-ab5e47dad851","Type":"ContainerDied","Data":"b5629bef799bd58fd7c322f334ed2c842d7e326aba733a303f14c5c0f68e0efa"} Jan 21 11:15:33 crc kubenswrapper[4881]: I0121 11:15:33.450532 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5629bef799bd58fd7c322f334ed2c842d7e326aba733a303f14c5c0f68e0efa" Jan 21 11:15:33 crc kubenswrapper[4881]: I0121 11:15:33.450305 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb"
Jan 21 11:15:34 crc kubenswrapper[4881]: E0121 11:15:34.312218 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231\\\"\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-798zt" podUID="761a1a49-e01e-4674-b1f4-da732e1def98"
Jan 21 11:15:36 crc kubenswrapper[4881]: E0121 11:15:36.314099 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32\\\"\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4" podUID="b72b2323-5329-4145-9cee-b447d9e2a304"
Jan 21 11:15:36 crc kubenswrapper[4881]: I0121 11:15:36.476704 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" event={"ID":"2fe210a4-2adf-4b55-9a43-c1c390f51b0e","Type":"ContainerStarted","Data":"8a0e87d567a41e21b314b35a5d90caf243d4da3f73e353958f6db8df3bcfc112"}
Jan 21 11:15:36 crc kubenswrapper[4881]: I0121 11:15:36.476856 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4"
Jan 21 11:15:36 crc kubenswrapper[4881]: I0121 11:15:36.478218 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q" event={"ID":"b1b17be2-e382-4916-8e53-a68c85b5bfc2","Type":"ContainerStarted","Data":"57e0e7d6fa227adc203daf6f6c58f0611794887404ca6cd9bf60634c2316a2c3"}
Jan 21 11:15:36 crc kubenswrapper[4881]: I0121 11:15:36.478392 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q"
Jan 21 11:15:36 crc kubenswrapper[4881]: I0121 11:15:36.504881 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" podStartSLOduration=47.535564233 podStartE2EDuration="56.504860269s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:15:27.059712115 +0000 UTC m=+1114.319668584" lastFinishedPulling="2026-01-21 11:15:36.029008151 +0000 UTC m=+1123.288964620" observedRunningTime="2026-01-21 11:15:36.503217398 +0000 UTC m=+1123.763173867" watchObservedRunningTime="2026-01-21 11:15:36.504860269 +0000 UTC m=+1123.764816738"
Jan 21 11:15:36 crc kubenswrapper[4881]: I0121 11:15:36.534405 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q" podStartSLOduration=45.378616515 podStartE2EDuration="56.534389214s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:15:24.85666378 +0000 UTC m=+1112.116620249" lastFinishedPulling="2026-01-21 11:15:36.012436479 +0000 UTC m=+1123.272392948" observedRunningTime="2026-01-21 11:15:36.529837151 +0000 UTC m=+1123.789793630" watchObservedRunningTime="2026-01-21 11:15:36.534389214 +0000 UTC m=+1123.794345683"
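In the entries above and below, each manager pod follows the same probe lifecycle: a "SyncLoop (probe)" line with status="" right after ContainerStarted, before any probe result has been recorded, then status="ready" once the /readyz endpoint answers. Per the container specs dumped earlier, that readiness probe is an HTTP GET on port 8081 with TimeoutSeconds:1, the same one-second budget behind the earlier "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" probe failures. A self-contained Go sketch of such a probe; only the /readyz path, the one-second timeout, and the 200-399 success rule are taken from the logged spec, while the test server and names are illustrative:

    // Minimal readiness-probe loop (illustrative, not kubelet's prober).
    // A handler that stalls past the 1s budget stands in for a manager
    // process that has not finished starting up.
    package main

    import (
    	"fmt"
    	"net/http"
    	"net/http/httptest"
    	"sync/atomic"
    	"time"
    )

    func probe(c *http.Client, url string) string {
    	resp, err := c.Get(url)
    	if err != nil {
    		// A slow handler surfaces here as "context deadline exceeded
    		// (Client.Timeout exceeded while awaiting headers)".
    		return fmt.Sprintf("failure: %v", err)
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
    		return "ready"
    	}
    	return fmt.Sprintf("failure: status %d", resp.StatusCode)
    }

    func main() {
    	var starting atomic.Bool
    	starting.Store(true)
    	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    		if starting.Load() {
    			time.Sleep(2 * time.Second) // still initializing: blows the 1s budget
    		}
    		w.WriteHeader(http.StatusOK)
    	}))
    	defer srv.Close()

    	c := &http.Client{Timeout: 1 * time.Second} // TimeoutSeconds: 1
    	fmt.Println(probe(c, srv.URL+"/readyz"))    // failure (timeout)
    	starting.Store(false)
    	fmt.Println(probe(c, srv.URL+"/readyz"))    // ready
    }
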
Jan 21 11:15:37 crc kubenswrapper[4881]: I0121 11:15:37.404023 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8"
Jan 21 11:15:38 crc kubenswrapper[4881]: E0121 11:15:38.312092 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc" podUID="8c8feeec-377c-499a-b666-895010f8ebeb"
Jan 21 11:15:41 crc kubenswrapper[4881]: I0121 11:15:41.021594 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-n7kgd"
Jan 21 11:15:41 crc kubenswrapper[4881]: I0121 11:15:41.025110 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-zmgll"
Jan 21 11:15:41 crc kubenswrapper[4881]: I0121 11:15:41.026550 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-bv8wz"
Jan 21 11:15:41 crc kubenswrapper[4881]: I0121 11:15:41.027164 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-s6gm8"
Jan 21 11:15:41 crc kubenswrapper[4881]: I0121 11:15:41.029421 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-c6994669c-jv7cr"
Jan 21 11:15:41 crc kubenswrapper[4881]: I0121 11:15:41.030023 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww"
Jan 21 11:15:41 crc kubenswrapper[4881]: I0121 11:15:41.033034 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-5qcms"
Jan 21 11:15:41 crc kubenswrapper[4881]: I0121 11:15:41.072358 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-svq8w"
Jan 21 11:15:41 crc kubenswrapper[4881]: I0121 11:15:41.166030 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-9zp7h"
Jan 21 11:15:41 crc kubenswrapper[4881]: I0121 11:15:41.265640 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9"
Jan 21 11:15:41 crc kubenswrapper[4881]: I0121 11:15:41.328179 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-vpqw4"
Jan 21 11:15:41 crc kubenswrapper[4881]: I0121 11:15:41.331140 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8"
Jan 21 11:15:41 crc kubenswrapper[4881]: I0121 11:15:41.466839 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-7qgck"
Jan 21 11:15:41 crc kubenswrapper[4881]: I0121 11:15:41.467585 4881 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-9f958b845-4wmln" Jan 21 11:15:42 crc kubenswrapper[4881]: I0121 11:15:42.613930 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" Jan 21 11:15:46 crc kubenswrapper[4881]: I0121 11:15:46.858101 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q" Jan 21 11:15:55 crc kubenswrapper[4881]: I0121 11:15:55.812312 4881 scope.go:117] "RemoveContainer" containerID="8d96b6ac2acd440f7e60cdd073c30593c6e0c4417e979419134016d123abd969" Jan 21 11:15:55 crc kubenswrapper[4881]: I0121 11:15:55.852025 4881 scope.go:117] "RemoveContainer" containerID="6c72489f579e659d3691891984c6b73c6e38f55451044ec4d36e63d9b6a30869" Jan 21 11:15:55 crc kubenswrapper[4881]: I0121 11:15:55.873992 4881 scope.go:117] "RemoveContainer" containerID="caff78396a524a2b7173fa89076846a700461a26e3edd64b51c4f8b958b5c232" Jan 21 11:15:59 crc kubenswrapper[4881]: I0121 11:15:59.862874 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc" event={"ID":"8c8feeec-377c-499a-b666-895010f8ebeb","Type":"ContainerStarted","Data":"9ec8d0919021fe429acf31e4c26796cde20929e0c4a91af67e3f588e7748e32c"} Jan 21 11:15:59 crc kubenswrapper[4881]: I0121 11:15:59.880885 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4" event={"ID":"55ce5ee6-47f4-4874-92dc-6ab78f2ce212","Type":"ContainerStarted","Data":"cd4f6669f53bcdd461f3289f7839a164427dd1a2eab328184ab161ff72233590"} Jan 21 11:15:59 crc kubenswrapper[4881]: I0121 11:15:59.880926 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz" event={"ID":"2aac430e-3ac8-4624-8575-66386b5c2df3","Type":"ContainerStarted","Data":"f0d8a93ee3a6c1809723ace8d21684a8771c184c59fe96d0c200e76d2b7449bb"} Jan 21 11:15:59 crc kubenswrapper[4881]: I0121 11:15:59.880940 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-798zt" event={"ID":"761a1a49-e01e-4674-b1f4-da732e1def98","Type":"ContainerStarted","Data":"fead8bd9d051fcfdfde9c0e76860cb7fe7f5e2785f04931a88723424452e79bd"} Jan 21 11:15:59 crc kubenswrapper[4881]: I0121 11:15:59.884642 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-65849867d6-798zt" Jan 21 11:15:59 crc kubenswrapper[4881]: I0121 11:15:59.884736 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz" Jan 21 11:15:59 crc kubenswrapper[4881]: I0121 11:15:59.884762 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4" Jan 21 11:15:59 crc kubenswrapper[4881]: I0121 11:15:59.888554 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4" event={"ID":"b72b2323-5329-4145-9cee-b447d9e2a304","Type":"ContainerStarted","Data":"65b6350c2a2757964d8fd1a52b1d961e92fcb2f9c327fcc1b8fa9828886fe533"} Jan 21 11:15:59 crc kubenswrapper[4881]: I0121 11:15:59.889762 4881 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4" Jan 21 11:15:59 crc kubenswrapper[4881]: I0121 11:15:59.916111 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4" podStartSLOduration=7.254934793 podStartE2EDuration="1m19.916088746s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.508911329 +0000 UTC m=+1071.768867798" lastFinishedPulling="2026-01-21 11:15:57.170065282 +0000 UTC m=+1144.430021751" observedRunningTime="2026-01-21 11:15:59.910503357 +0000 UTC m=+1147.170459826" watchObservedRunningTime="2026-01-21 11:15:59.916088746 +0000 UTC m=+1147.176045215" Jan 21 11:15:59 crc kubenswrapper[4881]: I0121 11:15:59.933327 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-65849867d6-798zt" podStartSLOduration=12.225210368 podStartE2EDuration="1m19.933308894s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.53543293 +0000 UTC m=+1071.795389399" lastFinishedPulling="2026-01-21 11:15:52.243531456 +0000 UTC m=+1139.503487925" observedRunningTime="2026-01-21 11:15:59.92991124 +0000 UTC m=+1147.189867709" watchObservedRunningTime="2026-01-21 11:15:59.933308894 +0000 UTC m=+1147.193265363" Jan 21 11:15:59 crc kubenswrapper[4881]: I0121 11:15:59.951528 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4" podStartSLOduration=6.889058793 podStartE2EDuration="1m19.951503687s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.241703636 +0000 UTC m=+1071.501660105" lastFinishedPulling="2026-01-21 11:15:57.30414853 +0000 UTC m=+1144.564104999" observedRunningTime="2026-01-21 11:15:59.950111052 +0000 UTC m=+1147.210067521" watchObservedRunningTime="2026-01-21 11:15:59.951503687 +0000 UTC m=+1147.211460156" Jan 21 11:15:59 crc kubenswrapper[4881]: I0121 11:15:59.966391 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc" podStartSLOduration=7.126640258 podStartE2EDuration="1m19.966368038s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.241965912 +0000 UTC m=+1071.501922381" lastFinishedPulling="2026-01-21 11:15:57.081693692 +0000 UTC m=+1144.341650161" observedRunningTime="2026-01-21 11:15:59.962532482 +0000 UTC m=+1147.222488971" watchObservedRunningTime="2026-01-21 11:15:59.966368038 +0000 UTC m=+1147.226324507" Jan 21 11:15:59 crc kubenswrapper[4881]: I0121 11:15:59.984381 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz" podStartSLOduration=12.24537667 podStartE2EDuration="1m19.984363616s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.513071873 +0000 UTC m=+1071.773028342" lastFinishedPulling="2026-01-21 11:15:52.252058819 +0000 UTC m=+1139.512015288" observedRunningTime="2026-01-21 11:15:59.983821412 +0000 UTC m=+1147.243777881" watchObservedRunningTime="2026-01-21 11:15:59.984363616 +0000 UTC m=+1147.244320075" Jan 21 11:16:01 crc kubenswrapper[4881]: I0121 11:16:01.667943 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz" Jan 21 11:16:10 crc kubenswrapper[4881]: I0121 11:16:10.626394 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4" Jan 21 11:16:10 crc kubenswrapper[4881]: I0121 11:16:10.868897 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-65849867d6-798zt" Jan 21 11:16:11 crc kubenswrapper[4881]: I0121 11:16:11.419191 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.641657 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cd6c77d8f-6z4pf"] Jan 21 11:16:33 crc kubenswrapper[4881]: E0121 11:16:33.642635 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c37f0ee6-fcc1-4663-91a3-ab5e47dad851" containerName="collect-profiles" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.642651 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c37f0ee6-fcc1-4663-91a3-ab5e47dad851" containerName="collect-profiles" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.642848 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c37f0ee6-fcc1-4663-91a3-ab5e47dad851" containerName="collect-profiles" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.645802 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cd6c77d8f-6z4pf" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.651454 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cd6c77d8f-6z4pf"] Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.655492 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.655523 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-q8h4t" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.657866 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.657882 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.746037 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7nlz\" (UniqueName: \"kubernetes.io/projected/ef08c5f4-dc05-46a7-bb1b-8039ba0117aa-kube-api-access-z7nlz\") pod \"dnsmasq-dns-5cd6c77d8f-6z4pf\" (UID: \"ef08c5f4-dc05-46a7-bb1b-8039ba0117aa\") " pod="openstack/dnsmasq-dns-5cd6c77d8f-6z4pf" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.746168 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef08c5f4-dc05-46a7-bb1b-8039ba0117aa-config\") pod \"dnsmasq-dns-5cd6c77d8f-6z4pf\" (UID: \"ef08c5f4-dc05-46a7-bb1b-8039ba0117aa\") " pod="openstack/dnsmasq-dns-5cd6c77d8f-6z4pf" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.816210 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66b6fdbd65-2qwr2"] Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.817525 4881 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.819491 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.825993 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66b6fdbd65-2qwr2"] Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.847575 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef08c5f4-dc05-46a7-bb1b-8039ba0117aa-config\") pod \"dnsmasq-dns-5cd6c77d8f-6z4pf\" (UID: \"ef08c5f4-dc05-46a7-bb1b-8039ba0117aa\") " pod="openstack/dnsmasq-dns-5cd6c77d8f-6z4pf" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.847650 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7nlz\" (UniqueName: \"kubernetes.io/projected/ef08c5f4-dc05-46a7-bb1b-8039ba0117aa-kube-api-access-z7nlz\") pod \"dnsmasq-dns-5cd6c77d8f-6z4pf\" (UID: \"ef08c5f4-dc05-46a7-bb1b-8039ba0117aa\") " pod="openstack/dnsmasq-dns-5cd6c77d8f-6z4pf" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.848547 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef08c5f4-dc05-46a7-bb1b-8039ba0117aa-config\") pod \"dnsmasq-dns-5cd6c77d8f-6z4pf\" (UID: \"ef08c5f4-dc05-46a7-bb1b-8039ba0117aa\") " pod="openstack/dnsmasq-dns-5cd6c77d8f-6z4pf" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.866383 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7nlz\" (UniqueName: \"kubernetes.io/projected/ef08c5f4-dc05-46a7-bb1b-8039ba0117aa-kube-api-access-z7nlz\") pod \"dnsmasq-dns-5cd6c77d8f-6z4pf\" (UID: \"ef08c5f4-dc05-46a7-bb1b-8039ba0117aa\") " pod="openstack/dnsmasq-dns-5cd6c77d8f-6z4pf" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.948756 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj4sc\" (UniqueName: \"kubernetes.io/projected/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-kube-api-access-gj4sc\") pod \"dnsmasq-dns-66b6fdbd65-2qwr2\" (UID: \"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338\") " pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.949131 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-dns-svc\") pod \"dnsmasq-dns-66b6fdbd65-2qwr2\" (UID: \"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338\") " pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.949244 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-config\") pod \"dnsmasq-dns-66b6fdbd65-2qwr2\" (UID: \"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338\") " pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.974942 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cd6c77d8f-6z4pf" Jan 21 11:16:34 crc kubenswrapper[4881]: I0121 11:16:34.051063 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj4sc\" (UniqueName: \"kubernetes.io/projected/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-kube-api-access-gj4sc\") pod \"dnsmasq-dns-66b6fdbd65-2qwr2\" (UID: \"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338\") " pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2" Jan 21 11:16:34 crc kubenswrapper[4881]: I0121 11:16:34.051382 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-dns-svc\") pod \"dnsmasq-dns-66b6fdbd65-2qwr2\" (UID: \"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338\") " pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2" Jan 21 11:16:34 crc kubenswrapper[4881]: I0121 11:16:34.051430 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-config\") pod \"dnsmasq-dns-66b6fdbd65-2qwr2\" (UID: \"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338\") " pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2" Jan 21 11:16:34 crc kubenswrapper[4881]: I0121 11:16:34.052273 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-dns-svc\") pod \"dnsmasq-dns-66b6fdbd65-2qwr2\" (UID: \"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338\") " pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2" Jan 21 11:16:34 crc kubenswrapper[4881]: I0121 11:16:34.052401 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-config\") pod \"dnsmasq-dns-66b6fdbd65-2qwr2\" (UID: \"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338\") " pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2" Jan 21 11:16:34 crc kubenswrapper[4881]: I0121 11:16:34.075096 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj4sc\" (UniqueName: \"kubernetes.io/projected/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-kube-api-access-gj4sc\") pod \"dnsmasq-dns-66b6fdbd65-2qwr2\" (UID: \"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338\") " pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2" Jan 21 11:16:34 crc kubenswrapper[4881]: I0121 11:16:34.137484 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2" Jan 21 11:16:34 crc kubenswrapper[4881]: I0121 11:16:34.337638 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cd6c77d8f-6z4pf"] Jan 21 11:16:34 crc kubenswrapper[4881]: I0121 11:16:34.694751 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66b6fdbd65-2qwr2"] Jan 21 11:16:34 crc kubenswrapper[4881]: W0121 11:16:34.703519 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d59d9e0_8dd3_4bbd_ab3c_01e0e4a3b338.slice/crio-b01fee828c93da9e7f8d614e402f96983135c404e70276a21ff9ec11bf276820 WatchSource:0}: Error finding container b01fee828c93da9e7f8d614e402f96983135c404e70276a21ff9ec11bf276820: Status 404 returned error can't find the container with id b01fee828c93da9e7f8d614e402f96983135c404e70276a21ff9ec11bf276820 Jan 21 11:16:35 crc kubenswrapper[4881]: I0121 11:16:35.249451 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2" event={"ID":"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338","Type":"ContainerStarted","Data":"b01fee828c93da9e7f8d614e402f96983135c404e70276a21ff9ec11bf276820"} Jan 21 11:16:35 crc kubenswrapper[4881]: I0121 11:16:35.251422 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd6c77d8f-6z4pf" event={"ID":"ef08c5f4-dc05-46a7-bb1b-8039ba0117aa","Type":"ContainerStarted","Data":"385e3ff947423b95dcd5a48ddbdf919434e21551c87e247766e40b37cfc15a72"} Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.141811 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cd6c77d8f-6z4pf"] Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.177013 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb"] Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.179245 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.192036 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb"] Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.221602 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-dns-svc\") pod \"dnsmasq-dns-6fc7fbc9b9-cj7zb\" (UID: \"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8\") " pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.221739 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zwhs\" (UniqueName: \"kubernetes.io/projected/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-kube-api-access-6zwhs\") pod \"dnsmasq-dns-6fc7fbc9b9-cj7zb\" (UID: \"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8\") " pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.221848 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-config\") pod \"dnsmasq-dns-6fc7fbc9b9-cj7zb\" (UID: \"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8\") " pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.323707 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-config\") pod \"dnsmasq-dns-6fc7fbc9b9-cj7zb\" (UID: \"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8\") " pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.324055 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-dns-svc\") pod \"dnsmasq-dns-6fc7fbc9b9-cj7zb\" (UID: \"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8\") " pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.324117 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zwhs\" (UniqueName: \"kubernetes.io/projected/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-kube-api-access-6zwhs\") pod \"dnsmasq-dns-6fc7fbc9b9-cj7zb\" (UID: \"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8\") " pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.325330 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-config\") pod \"dnsmasq-dns-6fc7fbc9b9-cj7zb\" (UID: \"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8\") " pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.325491 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-dns-svc\") pod \"dnsmasq-dns-6fc7fbc9b9-cj7zb\" (UID: \"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8\") " pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.357668 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zwhs\" (UniqueName: 
\"kubernetes.io/projected/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-kube-api-access-6zwhs\") pod \"dnsmasq-dns-6fc7fbc9b9-cj7zb\" (UID: \"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8\") " pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.517848 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.519746 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66b6fdbd65-2qwr2"] Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.545583 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7457897f45-vkp6c"] Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.548603 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7457897f45-vkp6c" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.568021 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7457897f45-vkp6c"] Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.634582 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99aba8a6-cc58-43be-9607-8ae1fcb57257-config\") pod \"dnsmasq-dns-7457897f45-vkp6c\" (UID: \"99aba8a6-cc58-43be-9607-8ae1fcb57257\") " pod="openstack/dnsmasq-dns-7457897f45-vkp6c" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.634628 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99aba8a6-cc58-43be-9607-8ae1fcb57257-dns-svc\") pod \"dnsmasq-dns-7457897f45-vkp6c\" (UID: \"99aba8a6-cc58-43be-9607-8ae1fcb57257\") " pod="openstack/dnsmasq-dns-7457897f45-vkp6c" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.634716 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gf75\" (UniqueName: \"kubernetes.io/projected/99aba8a6-cc58-43be-9607-8ae1fcb57257-kube-api-access-4gf75\") pod \"dnsmasq-dns-7457897f45-vkp6c\" (UID: \"99aba8a6-cc58-43be-9607-8ae1fcb57257\") " pod="openstack/dnsmasq-dns-7457897f45-vkp6c" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.736420 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99aba8a6-cc58-43be-9607-8ae1fcb57257-config\") pod \"dnsmasq-dns-7457897f45-vkp6c\" (UID: \"99aba8a6-cc58-43be-9607-8ae1fcb57257\") " pod="openstack/dnsmasq-dns-7457897f45-vkp6c" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.736968 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99aba8a6-cc58-43be-9607-8ae1fcb57257-dns-svc\") pod \"dnsmasq-dns-7457897f45-vkp6c\" (UID: \"99aba8a6-cc58-43be-9607-8ae1fcb57257\") " pod="openstack/dnsmasq-dns-7457897f45-vkp6c" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.737004 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gf75\" (UniqueName: \"kubernetes.io/projected/99aba8a6-cc58-43be-9607-8ae1fcb57257-kube-api-access-4gf75\") pod \"dnsmasq-dns-7457897f45-vkp6c\" (UID: \"99aba8a6-cc58-43be-9607-8ae1fcb57257\") " pod="openstack/dnsmasq-dns-7457897f45-vkp6c" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.738649 4881 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99aba8a6-cc58-43be-9607-8ae1fcb57257-config\") pod \"dnsmasq-dns-7457897f45-vkp6c\" (UID: \"99aba8a6-cc58-43be-9607-8ae1fcb57257\") " pod="openstack/dnsmasq-dns-7457897f45-vkp6c" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.739291 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99aba8a6-cc58-43be-9607-8ae1fcb57257-dns-svc\") pod \"dnsmasq-dns-7457897f45-vkp6c\" (UID: \"99aba8a6-cc58-43be-9607-8ae1fcb57257\") " pod="openstack/dnsmasq-dns-7457897f45-vkp6c" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.778242 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gf75\" (UniqueName: \"kubernetes.io/projected/99aba8a6-cc58-43be-9607-8ae1fcb57257-kube-api-access-4gf75\") pod \"dnsmasq-dns-7457897f45-vkp6c\" (UID: \"99aba8a6-cc58-43be-9607-8ae1fcb57257\") " pod="openstack/dnsmasq-dns-7457897f45-vkp6c" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.895237 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb"] Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.922228 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6557d744f-gt5cx"] Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.924223 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6557d744f-gt5cx" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.976966 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7457897f45-vkp6c" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.063760 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6557d744f-gt5cx"] Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.083756 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aec91505-d39a-41cf-90af-1593bcb02e68-config\") pod \"dnsmasq-dns-6557d744f-gt5cx\" (UID: \"aec91505-d39a-41cf-90af-1593bcb02e68\") " pod="openstack/dnsmasq-dns-6557d744f-gt5cx" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.083898 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aec91505-d39a-41cf-90af-1593bcb02e68-dns-svc\") pod \"dnsmasq-dns-6557d744f-gt5cx\" (UID: \"aec91505-d39a-41cf-90af-1593bcb02e68\") " pod="openstack/dnsmasq-dns-6557d744f-gt5cx" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.090324 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnn2q\" (UniqueName: \"kubernetes.io/projected/aec91505-d39a-41cf-90af-1593bcb02e68-kube-api-access-dnn2q\") pod \"dnsmasq-dns-6557d744f-gt5cx\" (UID: \"aec91505-d39a-41cf-90af-1593bcb02e68\") " pod="openstack/dnsmasq-dns-6557d744f-gt5cx" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.191981 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aec91505-d39a-41cf-90af-1593bcb02e68-config\") pod \"dnsmasq-dns-6557d744f-gt5cx\" (UID: \"aec91505-d39a-41cf-90af-1593bcb02e68\") " pod="openstack/dnsmasq-dns-6557d744f-gt5cx" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.192042 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aec91505-d39a-41cf-90af-1593bcb02e68-dns-svc\") pod \"dnsmasq-dns-6557d744f-gt5cx\" (UID: \"aec91505-d39a-41cf-90af-1593bcb02e68\") " pod="openstack/dnsmasq-dns-6557d744f-gt5cx" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.192088 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnn2q\" (UniqueName: \"kubernetes.io/projected/aec91505-d39a-41cf-90af-1593bcb02e68-kube-api-access-dnn2q\") pod \"dnsmasq-dns-6557d744f-gt5cx\" (UID: \"aec91505-d39a-41cf-90af-1593bcb02e68\") " pod="openstack/dnsmasq-dns-6557d744f-gt5cx" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.193463 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aec91505-d39a-41cf-90af-1593bcb02e68-config\") pod \"dnsmasq-dns-6557d744f-gt5cx\" (UID: \"aec91505-d39a-41cf-90af-1593bcb02e68\") " pod="openstack/dnsmasq-dns-6557d744f-gt5cx" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.193720 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aec91505-d39a-41cf-90af-1593bcb02e68-dns-svc\") pod \"dnsmasq-dns-6557d744f-gt5cx\" (UID: \"aec91505-d39a-41cf-90af-1593bcb02e68\") " pod="openstack/dnsmasq-dns-6557d744f-gt5cx" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.216931 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnn2q\" (UniqueName: \"kubernetes.io/projected/aec91505-d39a-41cf-90af-1593bcb02e68-kube-api-access-dnn2q\") pod \"dnsmasq-dns-6557d744f-gt5cx\" (UID: \"aec91505-d39a-41cf-90af-1593bcb02e68\") " pod="openstack/dnsmasq-dns-6557d744f-gt5cx" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.664176 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6557d744f-gt5cx" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.738196 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.739571 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.740002 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.745099 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.745147 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.745241 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.754124 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.754348 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.754423 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.754603 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.754633 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.754745 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.754774 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.754932 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.754982 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.754940 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.755265 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-tt7xn" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.755402 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.755653 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-x9qrf" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.762908 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.769821 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb"] Jan 21 11:16:39 crc kubenswrapper[4881]: W0121 11:16:39.813974 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb0e6ce6_181c_4edb_b4b3_d169c41c63a8.slice/crio-8b64289332b9bf6e24ce3af64b2717f89e14cd1b712818252df454ed0a94562c WatchSource:0}: Error finding container 8b64289332b9bf6e24ce3af64b2717f89e14cd1b712818252df454ed0a94562c: Status 404 returned error can't find the container with id 8b64289332b9bf6e24ce3af64b2717f89e14cd1b712818252df454ed0a94562c Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.872122 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.872673 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.872716 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7e90972-9be1-4d3e-852e-e7f7df6e6623-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.872741 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.872812 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.872844 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.872894 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7e90972-9be1-4d3e-852e-e7f7df6e6623-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.872928 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.872951 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-config-data\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.872978 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.873012 4881 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/078c2368-b247-49d4-8723-fd93918e99b1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.873053 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.873088 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjgnd\" (UniqueName: \"kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-kube-api-access-tjgnd\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.873117 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.873226 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.873945 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.874100 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmd5s\" (UniqueName: \"kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-kube-api-access-bmd5s\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.874150 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.874180 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/078c2368-b247-49d4-8723-fd93918e99b1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 
11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.874243 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.874281 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.874316 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.980266 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7457897f45-vkp6c"] Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982087 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982137 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/078c2368-b247-49d4-8723-fd93918e99b1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982179 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982212 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjgnd\" (UniqueName: \"kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-kube-api-access-tjgnd\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982243 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982281 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") 
" pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982317 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982342 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmd5s\" (UniqueName: \"kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-kube-api-access-bmd5s\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982368 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982391 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/078c2368-b247-49d4-8723-fd93918e99b1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982420 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982447 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982470 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982498 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982525 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982552 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7e90972-9be1-4d3e-852e-e7f7df6e6623-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982572 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982604 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982633 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982666 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7e90972-9be1-4d3e-852e-e7f7df6e6623-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982691 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982883 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-config-data\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.984188 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-config-data\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.985227 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.985576 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc 
kubenswrapper[4881]: I0121 11:16:39.986093 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.987314 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.987394 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.988241 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.988276 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.989413 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:39.992461 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:39.992862 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:39.992930 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.005290 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.013178 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjgnd\" (UniqueName: \"kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-kube-api-access-tjgnd\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.018683 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7e90972-9be1-4d3e-852e-e7f7df6e6623-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.021837 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmd5s\" (UniqueName: \"kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-kube-api-access-bmd5s\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.036693 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.049771 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/078c2368-b247-49d4-8723-fd93918e99b1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.050233 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.057654 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7e90972-9be1-4d3e-852e-e7f7df6e6623-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.064471 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/078c2368-b247-49d4-8723-fd93918e99b1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.074370 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 
11:16:40.136126 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.181234 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.184686 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.189926 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.196952 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.199260 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-server-conf" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.199440 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-config-data" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.199691 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-notifications-svc" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.199838 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-default-user" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.200469 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-erlang-cookie" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.200596 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-plugins-conf" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.202449 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-server-dockercfg-fc7sw" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.291201 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.292198 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/44bcf219-3358-4596-9d1e-88a51c415266-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.292277 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/44bcf219-3358-4596-9d1e-88a51c415266-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.292316 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44bcf219-3358-4596-9d1e-88a51c415266-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.292343 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/44bcf219-3358-4596-9d1e-88a51c415266-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.292361 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5n6k\" (UniqueName: \"kubernetes.io/projected/44bcf219-3358-4596-9d1e-88a51c415266-kube-api-access-q5n6k\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.292383 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/44bcf219-3358-4596-9d1e-88a51c415266-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.292402 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/44bcf219-3358-4596-9d1e-88a51c415266-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.292418 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/44bcf219-3358-4596-9d1e-88a51c415266-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.292441 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/44bcf219-3358-4596-9d1e-88a51c415266-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.292458 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/44bcf219-3358-4596-9d1e-88a51c415266-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.292480 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-notifications-server-0\" (UID: 
\"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.297850 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6557d744f-gt5cx"] Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.394306 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/44bcf219-3358-4596-9d1e-88a51c415266-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.394399 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/44bcf219-3358-4596-9d1e-88a51c415266-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.394437 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44bcf219-3358-4596-9d1e-88a51c415266-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.394453 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/44bcf219-3358-4596-9d1e-88a51c415266-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.394472 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5n6k\" (UniqueName: \"kubernetes.io/projected/44bcf219-3358-4596-9d1e-88a51c415266-kube-api-access-q5n6k\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.394493 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/44bcf219-3358-4596-9d1e-88a51c415266-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.394512 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/44bcf219-3358-4596-9d1e-88a51c415266-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.394533 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/44bcf219-3358-4596-9d1e-88a51c415266-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.394568 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/44bcf219-3358-4596-9d1e-88a51c415266-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.394606 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/44bcf219-3358-4596-9d1e-88a51c415266-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.394626 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.394934 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.415055 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44bcf219-3358-4596-9d1e-88a51c415266-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.423648 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/44bcf219-3358-4596-9d1e-88a51c415266-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.426164 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/44bcf219-3358-4596-9d1e-88a51c415266-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.426695 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/44bcf219-3358-4596-9d1e-88a51c415266-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.435636 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/44bcf219-3358-4596-9d1e-88a51c415266-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.436796 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/44bcf219-3358-4596-9d1e-88a51c415266-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.437762 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/44bcf219-3358-4596-9d1e-88a51c415266-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.449443 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/44bcf219-3358-4596-9d1e-88a51c415266-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.451457 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.453729 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5n6k\" (UniqueName: \"kubernetes.io/projected/44bcf219-3358-4596-9d1e-88a51c415266-kube-api-access-q5n6k\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.464395 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/44bcf219-3358-4596-9d1e-88a51c415266-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.473623 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.584329 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:40.855747 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6557d744f-gt5cx" event={"ID":"aec91505-d39a-41cf-90af-1593bcb02e68","Type":"ContainerStarted","Data":"11e9d0f8032d3e65513f2d8249ce3ac74bc1a4ddfcd269afe6c654eddabc71b8"} Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:40.859929 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb" event={"ID":"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8","Type":"ContainerStarted","Data":"8b64289332b9bf6e24ce3af64b2717f89e14cd1b712818252df454ed0a94562c"} Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:40.863777 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7457897f45-vkp6c" event={"ID":"99aba8a6-cc58-43be-9607-8ae1fcb57257","Type":"ContainerStarted","Data":"3ca12aa1fc94ac25d568434ebdd78b6fc24b1d504a1ce7b61d9ef849d50cf128"} Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.260492 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 11:16:42 crc kubenswrapper[4881]: W0121 11:16:41.332070 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod078c2368_b247_49d4_8723_fd93918e99b1.slice/crio-cb426b0ea6a917959cdcac6b6915e9a598cb2f51672af4e37994bc672acc84c9 WatchSource:0}: Error finding container cb426b0ea6a917959cdcac6b6915e9a598cb2f51672af4e37994bc672acc84c9: Status 404 returned error can't find the container with id cb426b0ea6a917959cdcac6b6915e9a598cb2f51672af4e37994bc672acc84c9 Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.468699 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.470298 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.477686 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.477957 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.478061 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-q8hmw" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.478481 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.483210 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.487259 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.625137 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.625334 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-config-data-default\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.625487 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.625824 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.625922 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r44km\" (UniqueName: \"kubernetes.io/projected/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-kube-api-access-r44km\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.626013 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-kolla-config\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.626036 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.626210 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.728743 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.728868 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-config-data-default\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.728946 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.729097 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.729093 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.729175 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r44km\" (UniqueName: \"kubernetes.io/projected/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-kube-api-access-r44km\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.729250 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-kolla-config\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.729276 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: 
\"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.729351 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.730031 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-config-data-default\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.730294 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-kolla-config\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.732019 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.733302 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.739264 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.742273 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.749499 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r44km\" (UniqueName: \"kubernetes.io/projected/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-kube-api-access-r44km\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.807158 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.900914 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"078c2368-b247-49d4-8723-fd93918e99b1","Type":"ContainerStarted","Data":"cb426b0ea6a917959cdcac6b6915e9a598cb2f51672af4e37994bc672acc84c9"} Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.103161 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.432374 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.435094 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.442724 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.442956 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.443053 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-mgnz4" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.443145 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.546438 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cd1973a5-773b-438b-aab7-709fb828324d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.546500 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4phxd\" (UniqueName: \"kubernetes.io/projected/cd1973a5-773b-438b-aab7-709fb828324d-kube-api-access-4phxd\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.546551 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cd1973a5-773b-438b-aab7-709fb828324d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.546622 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.546640 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd1973a5-773b-438b-aab7-709fb828324d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.546729 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd1973a5-773b-438b-aab7-709fb828324d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.546805 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cd1973a5-773b-438b-aab7-709fb828324d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.546842 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd1973a5-773b-438b-aab7-709fb828324d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.619794 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.777277 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.777826 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd1973a5-773b-438b-aab7-709fb828324d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.777945 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd1973a5-773b-438b-aab7-709fb828324d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.778100 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cd1973a5-773b-438b-aab7-709fb828324d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.778205 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd1973a5-773b-438b-aab7-709fb828324d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.778256 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cd1973a5-773b-438b-aab7-709fb828324d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.778288 4881 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4phxd\" (UniqueName: \"kubernetes.io/projected/cd1973a5-773b-438b-aab7-709fb828324d-kube-api-access-4phxd\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.781130 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.788560 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd1973a5-773b-438b-aab7-709fb828324d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.790255 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cd1973a5-773b-438b-aab7-709fb828324d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.790283 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cd1973a5-773b-438b-aab7-709fb828324d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.793448 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cd1973a5-773b-438b-aab7-709fb828324d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.794934 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cd1973a5-773b-438b-aab7-709fb828324d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.796202 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd1973a5-773b-438b-aab7-709fb828324d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.796397 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.798632 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.800394 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd1973a5-773b-438b-aab7-709fb828324d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.815244 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-t9dg7" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.815468 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.815730 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.821213 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.851329 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4phxd\" (UniqueName: \"kubernetes.io/projected/cd1973a5-773b-438b-aab7-709fb828324d-kube-api-access-4phxd\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.851418 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.908909 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.920236 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.953366 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 21 11:16:42 crc kubenswrapper[4881]: W0121 11:16:42.982172 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44bcf219_3358_4596_9d1e_88a51c415266.slice/crio-16c5e3afc533af42a0c79aba5b8ac657c33f906308b39274db955a90bb51ea58 WatchSource:0}: Error finding container 16c5e3afc533af42a0c79aba5b8ac657c33f906308b39274db955a90bb51ea58: Status 404 returned error can't find the container with id 16c5e3afc533af42a0c79aba5b8ac657c33f906308b39274db955a90bb51ea58 Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.998957 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7960c16a-de64-4154-9072-aee49e3bd573-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7960c16a-de64-4154-9072-aee49e3bd573\") " pod="openstack/memcached-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.999024 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7960c16a-de64-4154-9072-aee49e3bd573-kolla-config\") pod \"memcached-0\" (UID: \"7960c16a-de64-4154-9072-aee49e3bd573\") " pod="openstack/memcached-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.999061 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7960c16a-de64-4154-9072-aee49e3bd573-config-data\") pod \"memcached-0\" (UID: \"7960c16a-de64-4154-9072-aee49e3bd573\") " pod="openstack/memcached-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.999085 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7960c16a-de64-4154-9072-aee49e3bd573-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7960c16a-de64-4154-9072-aee49e3bd573\") " pod="openstack/memcached-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.999108 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g444t\" (UniqueName: \"kubernetes.io/projected/7960c16a-de64-4154-9072-aee49e3bd573-kube-api-access-g444t\") pod \"memcached-0\" (UID: \"7960c16a-de64-4154-9072-aee49e3bd573\") " pod="openstack/memcached-0" Jan 21 11:16:43 crc kubenswrapper[4881]: W0121 11:16:43.037000 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7e90972_9be1_4d3e_852e_e7f7df6e6623.slice/crio-0407be0eb8897677e11cb341e14b52b133b745f624185504d845fdccc7ff50c4 WatchSource:0}: Error finding container 0407be0eb8897677e11cb341e14b52b133b745f624185504d845fdccc7ff50c4: Status 404 returned error can't find the container with id 0407be0eb8897677e11cb341e14b52b133b745f624185504d845fdccc7ff50c4 Jan 21 11:16:43 crc kubenswrapper[4881]: I0121 11:16:43.100418 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/7960c16a-de64-4154-9072-aee49e3bd573-config-data\") pod \"memcached-0\" (UID: \"7960c16a-de64-4154-9072-aee49e3bd573\") " pod="openstack/memcached-0" Jan 21 11:16:43 crc kubenswrapper[4881]: I0121 11:16:43.100618 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7960c16a-de64-4154-9072-aee49e3bd573-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7960c16a-de64-4154-9072-aee49e3bd573\") " pod="openstack/memcached-0" Jan 21 11:16:43 crc kubenswrapper[4881]: I0121 11:16:43.100726 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g444t\" (UniqueName: \"kubernetes.io/projected/7960c16a-de64-4154-9072-aee49e3bd573-kube-api-access-g444t\") pod \"memcached-0\" (UID: \"7960c16a-de64-4154-9072-aee49e3bd573\") " pod="openstack/memcached-0" Jan 21 11:16:43 crc kubenswrapper[4881]: I0121 11:16:43.102611 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7960c16a-de64-4154-9072-aee49e3bd573-config-data\") pod \"memcached-0\" (UID: \"7960c16a-de64-4154-9072-aee49e3bd573\") " pod="openstack/memcached-0" Jan 21 11:16:43 crc kubenswrapper[4881]: I0121 11:16:43.102709 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7960c16a-de64-4154-9072-aee49e3bd573-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7960c16a-de64-4154-9072-aee49e3bd573\") " pod="openstack/memcached-0" Jan 21 11:16:43 crc kubenswrapper[4881]: I0121 11:16:43.102796 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7960c16a-de64-4154-9072-aee49e3bd573-kolla-config\") pod \"memcached-0\" (UID: \"7960c16a-de64-4154-9072-aee49e3bd573\") " pod="openstack/memcached-0" Jan 21 11:16:43 crc kubenswrapper[4881]: I0121 11:16:43.104216 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7960c16a-de64-4154-9072-aee49e3bd573-kolla-config\") pod \"memcached-0\" (UID: \"7960c16a-de64-4154-9072-aee49e3bd573\") " pod="openstack/memcached-0" Jan 21 11:16:43 crc kubenswrapper[4881]: I0121 11:16:43.109803 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7960c16a-de64-4154-9072-aee49e3bd573-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7960c16a-de64-4154-9072-aee49e3bd573\") " pod="openstack/memcached-0" Jan 21 11:16:43 crc kubenswrapper[4881]: I0121 11:16:43.111005 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7960c16a-de64-4154-9072-aee49e3bd573-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7960c16a-de64-4154-9072-aee49e3bd573\") " pod="openstack/memcached-0" Jan 21 11:16:43 crc kubenswrapper[4881]: I0121 11:16:43.144956 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g444t\" (UniqueName: \"kubernetes.io/projected/7960c16a-de64-4154-9072-aee49e3bd573-kube-api-access-g444t\") pod \"memcached-0\" (UID: \"7960c16a-de64-4154-9072-aee49e3bd573\") " pod="openstack/memcached-0" Jan 21 11:16:43 crc kubenswrapper[4881]: I0121 11:16:43.243677 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 21 11:16:43 crc kubenswrapper[4881]: I0121 11:16:43.451574 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 11:16:43 crc kubenswrapper[4881]: W0121 11:16:43.751846 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod197dd5bf_f68a_4d9d_b75c_de87a54ed46b.slice/crio-8a08ae4a936f9bbaf1abb307c032317c77dd53689a6e37ad792df8ddb1603258 WatchSource:0}: Error finding container 8a08ae4a936f9bbaf1abb307c032317c77dd53689a6e37ad792df8ddb1603258: Status 404 returned error can't find the container with id 8a08ae4a936f9bbaf1abb307c032317c77dd53689a6e37ad792df8ddb1603258 Jan 21 11:16:43 crc kubenswrapper[4881]: I0121 11:16:43.844820 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 11:16:43 crc kubenswrapper[4881]: I0121 11:16:43.990776 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f7e90972-9be1-4d3e-852e-e7f7df6e6623","Type":"ContainerStarted","Data":"0407be0eb8897677e11cb341e14b52b133b745f624185504d845fdccc7ff50c4"} Jan 21 11:16:43 crc kubenswrapper[4881]: I0121 11:16:43.995183 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"197dd5bf-f68a-4d9d-b75c-de87a54ed46b","Type":"ContainerStarted","Data":"8a08ae4a936f9bbaf1abb307c032317c77dd53689a6e37ad792df8ddb1603258"} Jan 21 11:16:44 crc kubenswrapper[4881]: I0121 11:16:44.006059 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"cd1973a5-773b-438b-aab7-709fb828324d","Type":"ContainerStarted","Data":"32b56190a0a8319e5d34df079d4aefc4527f4f97d92ba67b2ab0a2552ab4c75b"} Jan 21 11:16:44 crc kubenswrapper[4881]: I0121 11:16:44.028843 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"44bcf219-3358-4596-9d1e-88a51c415266","Type":"ContainerStarted","Data":"16c5e3afc533af42a0c79aba5b8ac657c33f906308b39274db955a90bb51ea58"} Jan 21 11:16:44 crc kubenswrapper[4881]: I0121 11:16:44.381507 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 11:16:44 crc kubenswrapper[4881]: W0121 11:16:44.398765 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7960c16a_de64_4154_9072_aee49e3bd573.slice/crio-6823e1cf605be543d2ea341657a2ff74c8a83ab32d1b0fd041ebf61158f070cf WatchSource:0}: Error finding container 6823e1cf605be543d2ea341657a2ff74c8a83ab32d1b0fd041ebf61158f070cf: Status 404 returned error can't find the container with id 6823e1cf605be543d2ea341657a2ff74c8a83ab32d1b0fd041ebf61158f070cf Jan 21 11:16:44 crc kubenswrapper[4881]: I0121 11:16:44.578501 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 11:16:44 crc kubenswrapper[4881]: I0121 11:16:44.583341 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 11:16:44 crc kubenswrapper[4881]: I0121 11:16:44.586888 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-bs89w" Jan 21 11:16:44 crc kubenswrapper[4881]: I0121 11:16:44.598016 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 11:16:44 crc kubenswrapper[4881]: I0121 11:16:44.606093 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25992\" (UniqueName: \"kubernetes.io/projected/c5b6c25e-e882-4ea4-a284-6f55bfe75093-kube-api-access-25992\") pod \"kube-state-metrics-0\" (UID: \"c5b6c25e-e882-4ea4-a284-6f55bfe75093\") " pod="openstack/kube-state-metrics-0" Jan 21 11:16:44 crc kubenswrapper[4881]: I0121 11:16:44.710252 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25992\" (UniqueName: \"kubernetes.io/projected/c5b6c25e-e882-4ea4-a284-6f55bfe75093-kube-api-access-25992\") pod \"kube-state-metrics-0\" (UID: \"c5b6c25e-e882-4ea4-a284-6f55bfe75093\") " pod="openstack/kube-state-metrics-0" Jan 21 11:16:44 crc kubenswrapper[4881]: I0121 11:16:44.765872 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25992\" (UniqueName: \"kubernetes.io/projected/c5b6c25e-e882-4ea4-a284-6f55bfe75093-kube-api-access-25992\") pod \"kube-state-metrics-0\" (UID: \"c5b6c25e-e882-4ea4-a284-6f55bfe75093\") " pod="openstack/kube-state-metrics-0" Jan 21 11:16:44 crc kubenswrapper[4881]: I0121 11:16:44.945065 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 11:16:45 crc kubenswrapper[4881]: I0121 11:16:45.048876 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"7960c16a-de64-4154-9072-aee49e3bd573","Type":"ContainerStarted","Data":"6823e1cf605be543d2ea341657a2ff74c8a83ab32d1b0fd041ebf61158f070cf"} Jan 21 11:16:45 crc kubenswrapper[4881]: I0121 11:16:45.934267 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 11:16:45 crc kubenswrapper[4881]: I0121 11:16:45.939475 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:45 crc kubenswrapper[4881]: I0121 11:16:45.945370 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 11:16:45 crc kubenswrapper[4881]: I0121 11:16:45.947457 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 21 11:16:45 crc kubenswrapper[4881]: I0121 11:16:45.947750 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 21 11:16:45 crc kubenswrapper[4881]: I0121 11:16:45.947733 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 21 11:16:45 crc kubenswrapper[4881]: I0121 11:16:45.948264 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 21 11:16:45 crc kubenswrapper[4881]: I0121 11:16:45.947903 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 21 11:16:45 crc kubenswrapper[4881]: I0121 11:16:45.948880 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-jwvdx" Jan 21 11:16:45 crc kubenswrapper[4881]: I0121 11:16:45.949069 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 21 11:16:45 crc kubenswrapper[4881]: I0121 11:16:45.949210 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.021266 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/75733567-f2a6-4331-bdea-147126213437-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.021544 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.021608 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.022016 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.022107 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-config\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.022301 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.022417 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.022563 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2vkg\" (UniqueName: \"kubernetes.io/projected/75733567-f2a6-4331-bdea-147126213437-kube-api-access-n2vkg\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.022666 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.022722 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/75733567-f2a6-4331-bdea-147126213437-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.102230 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.128745 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2vkg\" (UniqueName: \"kubernetes.io/projected/75733567-f2a6-4331-bdea-147126213437-kube-api-access-n2vkg\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.128845 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.128877 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" 
(UniqueName: \"kubernetes.io/projected/75733567-f2a6-4331-bdea-147126213437-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.128908 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/75733567-f2a6-4331-bdea-147126213437-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.128961 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.128979 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.129011 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.129031 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-config\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.129060 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.129085 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.130748 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.133578 
4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.134087 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.150970 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.151678 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/75733567-f2a6-4331-bdea-147126213437-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.158414 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-config\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.177981 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2vkg\" (UniqueName: \"kubernetes.io/projected/75733567-f2a6-4331-bdea-147126213437-kube-api-access-n2vkg\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.187558 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/75733567-f2a6-4331-bdea-147126213437-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.209872 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.218997 4881 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.219064 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3c91253029fdcc57c7bcc13c4ee1dc503079fe71761fa62e5d04837e0b8b075e/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.366609 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.623056 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 21 11:16:47 crc kubenswrapper[4881]: I0121 11:16:47.147528 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c5b6c25e-e882-4ea4-a284-6f55bfe75093","Type":"ContainerStarted","Data":"a902e47db0ad78d4b1a0c530458a8cc5f24a6bbadf9cb6042572a73fad768c2d"}
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.129122 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 21 11:16:48 crc kubenswrapper[4881]: W0121 11:16:48.160924 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75733567_f2a6_4331_bdea_147126213437.slice/crio-648f9884533415a5c2309f4dd9efc2ccd6cbaeb098dca1475cdb0221de466d52 WatchSource:0}: Error finding container 648f9884533415a5c2309f4dd9efc2ccd6cbaeb098dca1475cdb0221de466d52: Status 404 returned error can't find the container with id 648f9884533415a5c2309f4dd9efc2ccd6cbaeb098dca1475cdb0221de466d52
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.559776 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-s642n"]
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.561087 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-s642n"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.565016 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-kxx24"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.565208 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.565310 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-2rtl8"]
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.566729 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-2rtl8"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.567026 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.571588 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s642n"]
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.625372 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/256e0b4a-baac-415c-94c6-09f08fa09c7c-ovn-controller-tls-certs\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.625433 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9ff4a63e-40e5-4133-967e-9ba083f3603b-etc-ovs\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.625464 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnx8p\" (UniqueName: \"kubernetes.io/projected/9ff4a63e-40e5-4133-967e-9ba083f3603b-kube-api-access-bnx8p\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.625497 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/256e0b4a-baac-415c-94c6-09f08fa09c7c-var-run\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.625513 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9ff4a63e-40e5-4133-967e-9ba083f3603b-var-log\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.625527 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/256e0b4a-baac-415c-94c6-09f08fa09c7c-var-run-ovn\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.625544 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/256e0b4a-baac-415c-94c6-09f08fa09c7c-var-log-ovn\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.625572 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcpzd\" (UniqueName: \"kubernetes.io/projected/256e0b4a-baac-415c-94c6-09f08fa09c7c-kube-api-access-kcpzd\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.625596 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9ff4a63e-40e5-4133-967e-9ba083f3603b-var-lib\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.625632 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9ff4a63e-40e5-4133-967e-9ba083f3603b-var-run\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.625648 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ff4a63e-40e5-4133-967e-9ba083f3603b-scripts\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.625665 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/256e0b4a-baac-415c-94c6-09f08fa09c7c-scripts\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.625718 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/256e0b4a-baac-415c-94c6-09f08fa09c7c-combined-ca-bundle\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.648563 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-2rtl8"]
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.727839 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9ff4a63e-40e5-4133-967e-9ba083f3603b-var-lib\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.728053 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9ff4a63e-40e5-4133-967e-9ba083f3603b-var-run\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.728086 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ff4a63e-40e5-4133-967e-9ba083f3603b-scripts\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.728135 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/256e0b4a-baac-415c-94c6-09f08fa09c7c-scripts\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.728160 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/256e0b4a-baac-415c-94c6-09f08fa09c7c-combined-ca-bundle\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.729154 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/256e0b4a-baac-415c-94c6-09f08fa09c7c-ovn-controller-tls-certs\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.729317 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9ff4a63e-40e5-4133-967e-9ba083f3603b-etc-ovs\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.729450 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnx8p\" (UniqueName: \"kubernetes.io/projected/9ff4a63e-40e5-4133-967e-9ba083f3603b-kube-api-access-bnx8p\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.729570 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/256e0b4a-baac-415c-94c6-09f08fa09c7c-var-run\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.729657 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/256e0b4a-baac-415c-94c6-09f08fa09c7c-var-run-ovn\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.729696 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9ff4a63e-40e5-4133-967e-9ba083f3603b-var-log\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.729740 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/256e0b4a-baac-415c-94c6-09f08fa09c7c-var-log-ovn\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.729900 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcpzd\" (UniqueName: \"kubernetes.io/projected/256e0b4a-baac-415c-94c6-09f08fa09c7c-kube-api-access-kcpzd\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.731439 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9ff4a63e-40e5-4133-967e-9ba083f3603b-var-lib\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.884238 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9ff4a63e-40e5-4133-967e-9ba083f3603b-etc-ovs\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.886442 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ff4a63e-40e5-4133-967e-9ba083f3603b-scripts\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.886914 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9ff4a63e-40e5-4133-967e-9ba083f3603b-var-log\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.890044 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/256e0b4a-baac-415c-94c6-09f08fa09c7c-var-log-ovn\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.891337 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/256e0b4a-baac-415c-94c6-09f08fa09c7c-scripts\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.901371 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/256e0b4a-baac-415c-94c6-09f08fa09c7c-ovn-controller-tls-certs\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.909861 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.913521 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.925283 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcpzd\" (UniqueName: \"kubernetes.io/projected/256e0b4a-baac-415c-94c6-09f08fa09c7c-kube-api-access-kcpzd\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.925500 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnx8p\" (UniqueName: \"kubernetes.io/projected/9ff4a63e-40e5-4133-967e-9ba083f3603b-kube-api-access-bnx8p\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.925586 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.925801 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.926078 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.925836 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-4pbz9"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.925872 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.952395 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/256e0b4a-baac-415c-94c6-09f08fa09c7c-combined-ca-bundle\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n"
Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.956711 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.053203 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9ff4a63e-40e5-4133-967e-9ba083f3603b-var-run\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8"
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.053232 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/256e0b4a-baac-415c-94c6-09f08fa09c7c-var-run\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n"
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.053307 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/256e0b4a-baac-415c-94c6-09f08fa09c7c-var-run-ovn\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n"
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.125502 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/24136f67-aca3-4e40-b3c2-b36b7623475f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0"
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.125562 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ldh7\" (UniqueName: \"kubernetes.io/projected/24136f67-aca3-4e40-b3c2-b36b7623475f-kube-api-access-8ldh7\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0"
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.125643 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24136f67-aca3-4e40-b3c2-b36b7623475f-config\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0"
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.125676 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24136f67-aca3-4e40-b3c2-b36b7623475f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0"
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.125771 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/24136f67-aca3-4e40-b3c2-b36b7623475f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0"
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.125916 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/24136f67-aca3-4e40-b3c2-b36b7623475f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0"
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.126258 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0"
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.126354 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24136f67-aca3-4e40-b3c2-b36b7623475f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0"
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.228628 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24136f67-aca3-4e40-b3c2-b36b7623475f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0"
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.228763 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/24136f67-aca3-4e40-b3c2-b36b7623475f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0"
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.228836 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/24136f67-aca3-4e40-b3c2-b36b7623475f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0"
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.228872 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0"
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.228900 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24136f67-aca3-4e40-b3c2-b36b7623475f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0"
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.229051 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/24136f67-aca3-4e40-b3c2-b36b7623475f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0"
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.229094 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ldh7\" (UniqueName: \"kubernetes.io/projected/24136f67-aca3-4e40-b3c2-b36b7623475f-kube-api-access-8ldh7\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0"
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.229146 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24136f67-aca3-4e40-b3c2-b36b7623475f-config\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0"
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.230532 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24136f67-aca3-4e40-b3c2-b36b7623475f-config\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0"
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.231799 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24136f67-aca3-4e40-b3c2-b36b7623475f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0"
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.237956 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/24136f67-aca3-4e40-b3c2-b36b7623475f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0"
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.238447 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-nb-0"
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.264128 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-s642n"
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.265078 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/24136f67-aca3-4e40-b3c2-b36b7623475f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0"
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.265550 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24136f67-aca3-4e40-b3c2-b36b7623475f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0"
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.273183 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"75733567-f2a6-4331-bdea-147126213437","Type":"ContainerStarted","Data":"648f9884533415a5c2309f4dd9efc2ccd6cbaeb098dca1475cdb0221de466d52"}
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.275887 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ldh7\" (UniqueName: \"kubernetes.io/projected/24136f67-aca3-4e40-b3c2-b36b7623475f-kube-api-access-8ldh7\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0"
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.279338 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0"
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.280013 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/24136f67-aca3-4e40-b3c2-b36b7623475f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0"
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.299967 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-2rtl8"
Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.375711 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 21 11:16:50 crc kubenswrapper[4881]: I0121 11:16:50.785919 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s642n"]
Jan 21 11:16:50 crc kubenswrapper[4881]: W0121 11:16:50.804014 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod256e0b4a_baac_415c_94c6_09f08fa09c7c.slice/crio-792637da0f41910247ec89409c055d88e952498fc8631144ebd9d17e5ca5afee WatchSource:0}: Error finding container 792637da0f41910247ec89409c055d88e952498fc8631144ebd9d17e5ca5afee: Status 404 returned error can't find the container with id 792637da0f41910247ec89409c055d88e952498fc8631144ebd9d17e5ca5afee
Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.011864 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 21 11:16:51 crc kubenswrapper[4881]: W0121 11:16:51.054629 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24136f67_aca3_4e40_b3c2_b36b7623475f.slice/crio-0c0a112b00c037b00e1b246da95812e106e2db48db41ce77888ffd489bdc7c92 WatchSource:0}: Error finding container 0c0a112b00c037b00e1b246da95812e106e2db48db41ce77888ffd489bdc7c92: Status 404 returned error can't find the container with id 0c0a112b00c037b00e1b246da95812e106e2db48db41ce77888ffd489bdc7c92
Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.263248 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-2rtl8"]
Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.371522 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"24136f67-aca3-4e40-b3c2-b36b7623475f","Type":"ContainerStarted","Data":"0c0a112b00c037b00e1b246da95812e106e2db48db41ce77888ffd489bdc7c92"}
Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.376156 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s642n" event={"ID":"256e0b4a-baac-415c-94c6-09f08fa09c7c","Type":"ContainerStarted","Data":"792637da0f41910247ec89409c055d88e952498fc8631144ebd9d17e5ca5afee"}
Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.382367 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2rtl8" event={"ID":"9ff4a63e-40e5-4133-967e-9ba083f3603b","Type":"ContainerStarted","Data":"d1dcf19190c032a44507986d2f5617f115b9bb86905eadaa8c6882cc529a7d3c"}
Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.652255 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.656687 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.664191 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.664860 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.665413 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-zdddp"
Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.667580 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.679072 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.738590 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-5dzhr"]
Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.740087 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-5dzhr"
Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.749806 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.759257 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-5dzhr"]
Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.898214 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3884c64-25d6-42b5-a309-7eafa170719e-config\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.898374 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9bd229b-588d-477e-8501-cd976b539e3a-combined-ca-bundle\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr"
Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.898406 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b9bd229b-588d-477e-8501-cd976b539e3a-ovs-rundir\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr"
Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.898510 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3884c64-25d6-42b5-a309-7eafa170719e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.898535 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tlx9\" (UniqueName: \"kubernetes.io/projected/b9bd229b-588d-477e-8501-cd976b539e3a-kube-api-access-7tlx9\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr"
Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.898673 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9bd229b-588d-477e-8501-cd976b539e3a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr"
Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.898721 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgk6r\" (UniqueName: \"kubernetes.io/projected/c3884c64-25d6-42b5-a309-7eafa170719e-kube-api-access-vgk6r\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.898801 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.899158 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3884c64-25d6-42b5-a309-7eafa170719e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.899254 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c3884c64-25d6-42b5-a309-7eafa170719e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.899312 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b9bd229b-588d-477e-8501-cd976b539e3a-ovn-rundir\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr"
Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.899355 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3884c64-25d6-42b5-a309-7eafa170719e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.899408 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9bd229b-588d-477e-8501-cd976b539e3a-config\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr"
Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.899485 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3884c64-25d6-42b5-a309-7eafa170719e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.003888 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3884c64-25d6-42b5-a309-7eafa170719e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.003950 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tlx9\" (UniqueName: \"kubernetes.io/projected/b9bd229b-588d-477e-8501-cd976b539e3a-kube-api-access-7tlx9\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.004023 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9bd229b-588d-477e-8501-cd976b539e3a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.004057 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgk6r\" (UniqueName: \"kubernetes.io/projected/c3884c64-25d6-42b5-a309-7eafa170719e-kube-api-access-vgk6r\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.004102 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.004143 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3884c64-25d6-42b5-a309-7eafa170719e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.004180 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c3884c64-25d6-42b5-a309-7eafa170719e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.004254 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b9bd229b-588d-477e-8501-cd976b539e3a-ovn-rundir\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.004285 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3884c64-25d6-42b5-a309-7eafa170719e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.004318 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9bd229b-588d-477e-8501-cd976b539e3a-config\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.004376 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3884c64-25d6-42b5-a309-7eafa170719e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.004417 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3884c64-25d6-42b5-a309-7eafa170719e-config\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.004460 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9bd229b-588d-477e-8501-cd976b539e3a-combined-ca-bundle\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.004484 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b9bd229b-588d-477e-8501-cd976b539e3a-ovs-rundir\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.004957 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b9bd229b-588d-477e-8501-cd976b539e3a-ovs-rundir\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.006349 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c3884c64-25d6-42b5-a309-7eafa170719e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.006901 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.007139 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6557d744f-gt5cx"]
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.007366 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9bd229b-588d-477e-8501-cd976b539e3a-config\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.009883 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b9bd229b-588d-477e-8501-cd976b539e3a-ovn-rundir\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.010823 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3884c64-25d6-42b5-a309-7eafa170719e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.012046 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3884c64-25d6-42b5-a309-7eafa170719e-config\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.029214 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9bd229b-588d-477e-8501-cd976b539e3a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.029933 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3884c64-25d6-42b5-a309-7eafa170719e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.031095 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3884c64-25d6-42b5-a309-7eafa170719e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.031493 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9bd229b-588d-477e-8501-cd976b539e3a-combined-ca-bundle\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.060059 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tlx9\" (UniqueName: \"kubernetes.io/projected/b9bd229b-588d-477e-8501-cd976b539e3a-kube-api-access-7tlx9\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.093316 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgk6r\" (UniqueName: \"kubernetes.io/projected/c3884c64-25d6-42b5-a309-7eafa170719e-kube-api-access-vgk6r\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.101365 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fd8d879fc-flqh9"]
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.103093 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fd8d879fc-flqh9"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.108396 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.113165 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.168681 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fd8d879fc-flqh9"]
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.263077 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-5dzhr"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.263460 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3884c64-25d6-42b5-a309-7eafa170719e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.332618 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.394188 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-config\") pod \"dnsmasq-dns-fd8d879fc-flqh9\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " pod="openstack/dnsmasq-dns-fd8d879fc-flqh9"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.394515 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-ovsdbserver-nb\") pod \"dnsmasq-dns-fd8d879fc-flqh9\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " pod="openstack/dnsmasq-dns-fd8d879fc-flqh9"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.394591 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4lhq\" (UniqueName: \"kubernetes.io/projected/42132c17-6a2d-48d1-a636-3eae7558d55c-kube-api-access-x4lhq\") pod \"dnsmasq-dns-fd8d879fc-flqh9\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " pod="openstack/dnsmasq-dns-fd8d879fc-flqh9"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.394676 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-dns-svc\") pod \"dnsmasq-dns-fd8d879fc-flqh9\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " pod="openstack/dnsmasq-dns-fd8d879fc-flqh9"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.551697 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-config\") pod \"dnsmasq-dns-fd8d879fc-flqh9\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " pod="openstack/dnsmasq-dns-fd8d879fc-flqh9"
Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.552284 4881 reconciler_common.go:218] "operationExecutor.MountVolume
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-ovsdbserver-nb\") pod \"dnsmasq-dns-fd8d879fc-flqh9\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.552341 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4lhq\" (UniqueName: \"kubernetes.io/projected/42132c17-6a2d-48d1-a636-3eae7558d55c-kube-api-access-x4lhq\") pod \"dnsmasq-dns-fd8d879fc-flqh9\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.552369 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-dns-svc\") pod \"dnsmasq-dns-fd8d879fc-flqh9\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.553558 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-dns-svc\") pod \"dnsmasq-dns-fd8d879fc-flqh9\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.558902 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-config\") pod \"dnsmasq-dns-fd8d879fc-flqh9\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.559501 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-ovsdbserver-nb\") pod \"dnsmasq-dns-fd8d879fc-flqh9\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.581975 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4lhq\" (UniqueName: \"kubernetes.io/projected/42132c17-6a2d-48d1-a636-3eae7558d55c-kube-api-access-x4lhq\") pod \"dnsmasq-dns-fd8d879fc-flqh9\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.607252 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" Jan 21 11:16:55 crc kubenswrapper[4881]: I0121 11:16:55.114764 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 11:16:55 crc kubenswrapper[4881]: I0121 11:16:55.604777 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-5dzhr"] Jan 21 11:16:58 crc kubenswrapper[4881]: W0121 11:16:58.378670 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3884c64_25d6_42b5_a309_7eafa170719e.slice/crio-e70f087787468da8f67f380f8c1a171bd117d7c55ff0c085df1f8c6975cbc30b WatchSource:0}: Error finding container e70f087787468da8f67f380f8c1a171bd117d7c55ff0c085df1f8c6975cbc30b: Status 404 returned error can't find the container with id e70f087787468da8f67f380f8c1a171bd117d7c55ff0c085df1f8c6975cbc30b Jan 21 11:16:59 crc kubenswrapper[4881]: I0121 11:16:59.274682 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c3884c64-25d6-42b5-a309-7eafa170719e","Type":"ContainerStarted","Data":"e70f087787468da8f67f380f8c1a171bd117d7c55ff0c085df1f8c6975cbc30b"} Jan 21 11:16:59 crc kubenswrapper[4881]: I0121 11:16:59.392585 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fd8d879fc-flqh9"] Jan 21 11:16:59 crc kubenswrapper[4881]: I0121 11:16:59.850942 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:16:59 crc kubenswrapper[4881]: I0121 11:16:59.851010 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:17:15 crc kubenswrapper[4881]: W0121 11:17:15.800395 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9bd229b_588d_477e_8501_cd976b539e3a.slice/crio-be2a0d6b1ba15f8d0d2b6045bf47f1d37d53e641d993a41a219ad2098fcd13ed WatchSource:0}: Error finding container be2a0d6b1ba15f8d0d2b6045bf47f1d37d53e641d993a41a219ad2098fcd13ed: Status 404 returned error can't find the container with id be2a0d6b1ba15f8d0d2b6045bf47f1d37d53e641d993a41a219ad2098fcd13ed Jan 21 11:17:15 crc kubenswrapper[4881]: W0121 11:17:15.954373 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42132c17_6a2d_48d1_a636_3eae7558d55c.slice/crio-0a92f372c9af6d73af85424fa74f5bca2b7445ea9a9d2271fd330b7797ed5b0d WatchSource:0}: Error finding container 0a92f372c9af6d73af85424fa74f5bca2b7445ea9a9d2271fd330b7797ed5b0d: Status 404 returned error can't find the container with id 0a92f372c9af6d73af85424fa74f5bca2b7445ea9a9d2271fd330b7797ed5b0d Jan 21 11:17:16 crc kubenswrapper[4881]: I0121 11:17:16.427191 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-5dzhr" event={"ID":"b9bd229b-588d-477e-8501-cd976b539e3a","Type":"ContainerStarted","Data":"be2a0d6b1ba15f8d0d2b6045bf47f1d37d53e641d993a41a219ad2098fcd13ed"} Jan 21 11:17:16 
crc kubenswrapper[4881]: I0121 11:17:16.429031 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" event={"ID":"42132c17-6a2d-48d1-a636-3eae7558d55c","Type":"ContainerStarted","Data":"0a92f372c9af6d73af85424fa74f5bca2b7445ea9a9d2271fd330b7797ed5b0d"} Jan 21 11:17:16 crc kubenswrapper[4881]: E0121 11:17:16.555265 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:b1970a675905d0a72c5f2ca8159fa3f2ae8bf77ab674ec2f465e7e95d0e8167b: Get \"http://38.102.83.182:5001/v2/podified-master-centos10/openstack-rabbitmq/blobs/sha256:b1970a675905d0a72c5f2ca8159fa3f2ae8bf77ab674ec2f465e7e95d0e8167b\": context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Jan 21 11:17:16 crc kubenswrapper[4881]: E0121 11:17:16.555336 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = reading blob sha256:b1970a675905d0a72c5f2ca8159fa3f2ae8bf77ab674ec2f465e7e95d0e8167b: Get \"http://38.102.83.182:5001/v2/podified-master-centos10/openstack-rabbitmq/blobs/sha256:b1970a675905d0a72c5f2ca8159fa3f2ae8bf77ab674ec2f465e7e95d0e8167b\": context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Jan 21 11:17:16 crc kubenswrapper[4881]: E0121 11:17:16.555547 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.182:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bmd5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(078c2368-b247-49d4-8723-fd93918e99b1): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:b1970a675905d0a72c5f2ca8159fa3f2ae8bf77ab674ec2f465e7e95d0e8167b: Get \"http://38.102.83.182:5001/v2/podified-master-centos10/openstack-rabbitmq/blobs/sha256:b1970a675905d0a72c5f2ca8159fa3f2ae8bf77ab674ec2f465e7e95d0e8167b\": context canceled" logger="UnhandledError" Jan 21 11:17:16 crc kubenswrapper[4881]: E0121 11:17:16.556891 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:b1970a675905d0a72c5f2ca8159fa3f2ae8bf77ab674ec2f465e7e95d0e8167b: Get \\\"http://38.102.83.182:5001/v2/podified-master-centos10/openstack-rabbitmq/blobs/sha256:b1970a675905d0a72c5f2ca8159fa3f2ae8bf77ab674ec2f465e7e95d0e8167b\\\": context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="078c2368-b247-49d4-8723-fd93918e99b1" Jan 21 11:17:17 crc kubenswrapper[4881]: E0121 11:17:17.442753 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="078c2368-b247-49d4-8723-fd93918e99b1" Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.025380 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:0961b6750dea9d7809f870d1b513a1f88673a4f8bb098afb340a90426edbefe5: Get 
\"http://38.102.83.182:5001/v2/podified-master-centos10/openstack-ovn-nb-db-server/blobs/sha256:0961b6750dea9d7809f870d1b513a1f88673a4f8bb098afb340a90426edbefe5\": context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-ovn-nb-db-server:watcher_latest" Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.025479 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = reading blob sha256:0961b6750dea9d7809f870d1b513a1f88673a4f8bb098afb340a90426edbefe5: Get \"http://38.102.83.182:5001/v2/podified-master-centos10/openstack-ovn-nb-db-server/blobs/sha256:0961b6750dea9d7809f870d1b513a1f88673a4f8bb098afb340a90426edbefe5\": context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-ovn-nb-db-server:watcher_latest" Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.025667 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-nb,Image:38.102.83.182:5001/podified-master-centos10/openstack-ovn-nb-db-server:watcher_latest,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ncbh5ffh56dh7chbdh75h58h5d4h5bfh596h576h5ddh7bh86h56dh677h58dh687h66bh676h67ch55ch667h68hf4h78h555h79h5fch67bh95h698q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ldh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof 
ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(24136f67-aca3-4e40-b3c2-b36b7623475f): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:0961b6750dea9d7809f870d1b513a1f88673a4f8bb098afb340a90426edbefe5: Get \"http://38.102.83.182:5001/v2/podified-master-centos10/openstack-ovn-nb-db-server/blobs/sha256:0961b6750dea9d7809f870d1b513a1f88673a4f8bb098afb340a90426edbefe5\": context canceled" logger="UnhandledError" Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.039916 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-mariadb:watcher_latest" Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.039999 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-mariadb:watcher_latest" Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.040133 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:38.102.83.182:5001/podified-master-centos10/openstack-mariadb:watcher_latest,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r44km,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(197dd5bf-f68a-4d9d-b75c-de87a54ed46b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.041413 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="197dd5bf-f68a-4d9d-b75c-de87a54ed46b" Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.056200 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-mariadb:watcher_latest" Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.056292 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-mariadb:watcher_latest" Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.056551 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:38.102.83.182:5001/podified-master-centos10/openstack-mariadb:watcher_latest,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4phxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(cd1973a5-773b-438b-aab7-709fb828324d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.057899 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="cd1973a5-773b-438b-aab7-709fb828324d" Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.452174 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-mariadb:watcher_latest\\\"\"" pod="openstack/openstack-galera-0" podUID="197dd5bf-f68a-4d9d-b75c-de87a54ed46b" Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.453442 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-mariadb:watcher_latest\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="cd1973a5-773b-438b-aab7-709fb828324d" Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.675230 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a" Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.675556 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init-config-reloader,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,Command:[/bin/prometheus-config-reloader],Args:[--watch-interval=0 --listen-address=:8081 --config-file=/etc/prometheus/config/prometheus.yaml.gz --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0 --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1 --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:reloader-init,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:SHARD,Value:0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/prometheus/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:false,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n2vkg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(75733567-f2a6-4331-bdea-147126213437): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled" logger="UnhandledError" Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.676970 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="75733567-f2a6-4331-bdea-147126213437" Jan 21 11:17:19 crc kubenswrapper[4881]: E0121 11:17:19.460174 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75733567-f2a6-4331-bdea-147126213437" Jan 21 11:17:23 crc kubenswrapper[4881]: E0121 11:17:23.329886 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Jan 21 11:17:23 crc kubenswrapper[4881]: E0121 11:17:23.330251 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Jan 21 11:17:23 crc kubenswrapper[4881]: E0121 11:17:23.330436 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.182:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tjgnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(f7e90972-9be1-4d3e-852e-e7f7df6e6623): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:17:23 crc kubenswrapper[4881]: E0121 11:17:23.331654 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="f7e90972-9be1-4d3e-852e-e7f7df6e6623" Jan 21 11:17:23 crc kubenswrapper[4881]: E0121 11:17:23.499646 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" pod="openstack/rabbitmq-server-0" podUID="f7e90972-9be1-4d3e-852e-e7f7df6e6623" Jan 21 11:17:24 crc kubenswrapper[4881]: E0121 11:17:24.019539 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-memcached:watcher_latest" Jan 21 11:17:24 crc kubenswrapper[4881]: E0121 11:17:24.019603 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-memcached:watcher_latest" Jan 21 11:17:24 crc kubenswrapper[4881]: E0121 11:17:24.019768 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:38.102.83.182:5001/podified-master-centos10/openstack-memcached:watcher_latest,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n55fh5bch5f7hc7h556h5d5h95h678h54dh7fh6bh5b7h95h59bh65h66ch89hc4h599hbbh685h676hd8hf4h84h5b7h686h55bh65h55ch5c8h658q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g444t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(7960c16a-de64-4154-9072-aee49e3bd573): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:17:24 crc kubenswrapper[4881]: E0121 11:17:24.021351 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="7960c16a-de64-4154-9072-aee49e3bd573" Jan 21 11:17:24 crc kubenswrapper[4881]: E0121 11:17:24.237183 4881 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Jan 21 11:17:24 crc kubenswrapper[4881]: E0121 11:17:24.237253 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Jan 21 11:17:24 crc kubenswrapper[4881]: E0121 11:17:24.237499 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.182:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q5n6k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-notifications-server-0_openstack(44bcf219-3358-4596-9d1e-88a51c415266): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:17:24 crc kubenswrapper[4881]: E0121 11:17:24.239916 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-notifications-server-0" podUID="44bcf219-3358-4596-9d1e-88a51c415266" Jan 21 11:17:24 crc kubenswrapper[4881]: E0121 11:17:24.505886 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-memcached:watcher_latest\\\"\"" pod="openstack/memcached-0" podUID="7960c16a-de64-4154-9072-aee49e3bd573" Jan 21 11:17:24 crc kubenswrapper[4881]: E0121 11:17:24.506007 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" pod="openstack/rabbitmq-notifications-server-0" podUID="44bcf219-3358-4596-9d1e-88a51c415266" Jan 21 11:17:25 crc kubenswrapper[4881]: E0121 11:17:25.474007 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-ovn-base:watcher_latest" Jan 21 11:17:25 crc kubenswrapper[4881]: E0121 11:17:25.474123 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-ovn-base:watcher_latest" Jan 21 11:17:25 crc kubenswrapper[4881]: E0121 11:17:25.474304 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:ovsdb-server-init,Image:38.102.83.182:5001/podified-master-centos10/openstack-ovn-base:watcher_latest,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7h694h5f6h59bh566h87h9h7h686h54fhbfh668h599h596hbfh595h5bfh65ch54fh8bh64bh587h559h569hcdhddh54dh56bh5c8hfdh65dh57dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnx8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN 
SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-2rtl8_openstack(9ff4a63e-40e5-4133-967e-9ba083f3603b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:17:25 crc kubenswrapper[4881]: E0121 11:17:25.476251 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ovs-2rtl8" podUID="9ff4a63e-40e5-4133-967e-9ba083f3603b" Jan 21 11:17:25 crc kubenswrapper[4881]: E0121 11:17:25.511328 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-ovn-base:watcher_latest\\\"\"" pod="openstack/ovn-controller-ovs-2rtl8" podUID="9ff4a63e-40e5-4133-967e-9ba083f3603b" Jan 21 11:17:29 crc kubenswrapper[4881]: I0121 11:17:29.852171 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:17:29 crc kubenswrapper[4881]: I0121 11:17:29.852765 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.683810 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.684318 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.684447 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gj4sc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-66b6fdbd65-2qwr2_openstack(5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.685653 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2" podUID="5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338" Jan 21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.800411 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest" Jan 21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.800476 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest" Jan 21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.800657 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:38.102.83.182:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7h694h5f6h59bh566h87h9h7h686h54fhbfh668h599h596hbfh595h5bfh65ch54fh8bh64bh587h559h569hcdhddh54dh56bh5c8hfdh65dh57dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kcpzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-s642n_openstack(256e0b4a-baac-415c-94c6-09f08fa09c7c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.801905 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-s642n" podUID="256e0b4a-baac-415c-94c6-09f08fa09c7c" Jan 21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.846712 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.846768 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.846916 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gf75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7457897f45-vkp6c_openstack(99aba8a6-cc58-43be-9607-8ae1fcb57257): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.848076 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7457897f45-vkp6c" podUID="99aba8a6-cc58-43be-9607-8ae1fcb57257" Jan 
21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.849938 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.849973 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.850076 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59dh59h578h67chf9h6h5cch694h9ch677h67fh657h5bfh65dh67fhb8h68dh5dfhf9h55bhcfh84h698h549h5b9h59bh5c8h647h557h9dh57bh5d5q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4lhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-fd8d879fc-flqh9_openstack(42132c17-6a2d-48d1-a636-3eae7558d55c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.851513 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" podUID="42132c17-6a2d-48d1-a636-3eae7558d55c" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 
11:17:36.206961 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.207026 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.207194 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-sb,Image:38.102.83.182:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf7h674h67dh544h54ch679h557h59ch545h59ch547h69hfch5f8h5f7h575h57fh79h5d7h8ch569h679h5cch5fh5cch56ch5d4hdch645h596h66hd6q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-sb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vgk6r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof 
ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(c3884c64-25d6-42b5-a309-7eafa170719e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.251445 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.251585 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.251726 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c7h56dh5cfh8bh54fhbbhf4h5b9hdch67fhd7h55fh55fh6ch9h548h54ch665h647h6h8fhd6h5dfh5cdh58bh577h66fh695h5fbh55h77h5fcq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dnn2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6557d744f-gt5cx_openstack(aec91505-d39a-41cf-90af-1593bcb02e68): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.253005 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6557d744f-gt5cx" podUID="aec91505-d39a-41cf-90af-1593bcb02e68" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.400854 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.401024 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:nc9h8ch67h5bdh5fch589h98h67bh99h548h59ch558h7ch65fh76hf9hf9h99h5h5fh56bhd9hd7h64h67ch65hb9h65bh76h569h6bhcfq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovs-rundir,ReadOnly:true,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-rundir,ReadOnly:true,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7tlx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-metrics-5dzhr_openstack(b9bd229b-588d-477e-8501-cd976b539e3a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.402215 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-metrics-5dzhr" podUID="b9bd229b-588d-477e-8501-cd976b539e3a" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.426984 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.427183 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:ncbh5ffh56dh7chbdh75h58h5d4h5bfh596h576h5ddh7bh86h56dh677h58dh687h66bh676h67ch55ch667h68hf4h78h555h79h5fch67bh95h698q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ldh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(24136f67-aca3-4e40-b3c2-b36b7623475f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.428389 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:0961b6750dea9d7809f870d1b513a1f88673a4f8bb098afb340a90426edbefe5: Get \\\"http://38.102.83.182:5001/v2/podified-master-centos10/openstack-ovn-nb-db-server/blobs/sha256:0961b6750dea9d7809f870d1b513a1f88673a4f8bb098afb340a90426edbefe5\\\": context canceled\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack/ovsdbserver-nb-0" podUID="24136f67-aca3-4e40-b3c2-b36b7623475f" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.431176 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.431209 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.431291 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6zwhs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6fc7fbc9b9-cj7zb_openstack(eb0e6ce6-181c-4edb-b4b3-d169c41c63a8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.432545 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb" podUID="eb0e6ce6-181c-4edb-b4b3-d169c41c63a8" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.449808 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.449868 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.450002 
4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z7nlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5cd6c77d8f-6z4pf_openstack(ef08c5f4-dc05-46a7-bb1b-8039ba0117aa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.451197 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5cd6c77d8f-6z4pf" podUID="ef08c5f4-dc05-46a7-bb1b-8039ba0117aa" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.609355 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest\\\"\"" pod="openstack/dnsmasq-dns-7457897f45-vkp6c" podUID="99aba8a6-cc58-43be-9607-8ae1fcb57257" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.609935 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest\\\"\"" pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" podUID="42132c17-6a2d-48d1-a636-3eae7558d55c" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.609997 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovn-controller-metrics-5dzhr" podUID="b9bd229b-588d-477e-8501-cd976b539e3a" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.610381 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest\\\"\"" pod="openstack/ovn-controller-s642n" podUID="256e0b4a-baac-415c-94c6-09f08fa09c7c" Jan 21 11:17:37 crc kubenswrapper[4881]: E0121 11:17:37.268155 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 21 11:17:37 crc kubenswrapper[4881]: E0121 11:17:37.268898 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 21 11:17:37 crc kubenswrapper[4881]: E0121 11:17:37.269103 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-25992,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(c5b6c25e-e882-4ea4-a284-6f55bfe75093): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 11:17:37 
crc kubenswrapper[4881]: E0121 11:17:37.270481 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="c5b6c25e-e882-4ea4-a284-6f55bfe75093" Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.454887 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb" Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.550180 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zwhs\" (UniqueName: \"kubernetes.io/projected/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-kube-api-access-6zwhs\") pod \"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8\" (UID: \"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8\") " Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.550289 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-config\") pod \"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8\" (UID: \"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8\") " Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.550822 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-config" (OuterVolumeSpecName: "config") pod "eb0e6ce6-181c-4edb-b4b3-d169c41c63a8" (UID: "eb0e6ce6-181c-4edb-b4b3-d169c41c63a8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.553720 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-kube-api-access-6zwhs" (OuterVolumeSpecName: "kube-api-access-6zwhs") pod "eb0e6ce6-181c-4edb-b4b3-d169c41c63a8" (UID: "eb0e6ce6-181c-4edb-b4b3-d169c41c63a8"). InnerVolumeSpecName "kube-api-access-6zwhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.616630 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6557d744f-gt5cx" event={"ID":"aec91505-d39a-41cf-90af-1593bcb02e68","Type":"ContainerDied","Data":"11e9d0f8032d3e65513f2d8249ce3ac74bc1a4ddfcd269afe6c654eddabc71b8"} Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.616681 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11e9d0f8032d3e65513f2d8249ce3ac74bc1a4ddfcd269afe6c654eddabc71b8" Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.618191 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb" event={"ID":"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8","Type":"ContainerDied","Data":"8b64289332b9bf6e24ce3af64b2717f89e14cd1b712818252df454ed0a94562c"} Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.618213 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb" Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.619876 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2" event={"ID":"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338","Type":"ContainerDied","Data":"b01fee828c93da9e7f8d614e402f96983135c404e70276a21ff9ec11bf276820"} Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.620637 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b01fee828c93da9e7f8d614e402f96983135c404e70276a21ff9ec11bf276820" Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.621836 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd6c77d8f-6z4pf" event={"ID":"ef08c5f4-dc05-46a7-bb1b-8039ba0117aa","Type":"ContainerDied","Data":"385e3ff947423b95dcd5a48ddbdf919434e21551c87e247766e40b37cfc15a72"} Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.621874 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="385e3ff947423b95dcd5a48ddbdf919434e21551c87e247766e40b37cfc15a72" Jan 21 11:17:37 crc kubenswrapper[4881]: E0121 11:17:37.623357 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="c5b6c25e-e882-4ea4-a284-6f55bfe75093" Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.651904 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-dns-svc\") pod \"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8\" (UID: \"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8\") " Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.652289 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.652306 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zwhs\" (UniqueName: \"kubernetes.io/projected/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-kube-api-access-6zwhs\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.652511 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eb0e6ce6-181c-4edb-b4b3-d169c41c63a8" (UID: "eb0e6ce6-181c-4edb-b4b3-d169c41c63a8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.728796 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2" Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.753146 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-dns-svc\") pod \"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338\" (UID: \"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338\") " Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.753247 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gj4sc\" (UniqueName: \"kubernetes.io/projected/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-kube-api-access-gj4sc\") pod \"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338\" (UID: \"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338\") " Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.753271 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-config\") pod \"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338\" (UID: \"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338\") " Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.753550 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.753797 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338" (UID: "5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.753947 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-config" (OuterVolumeSpecName: "config") pod "5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338" (UID: "5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.760858 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-kube-api-access-gj4sc" (OuterVolumeSpecName: "kube-api-access-gj4sc") pod "5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338" (UID: "5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338"). InnerVolumeSpecName "kube-api-access-gj4sc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.854611 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gj4sc\" (UniqueName: \"kubernetes.io/projected/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-kube-api-access-gj4sc\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.854916 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.854929 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.961172 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6557d744f-gt5cx" Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.010554 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cd6c77d8f-6z4pf" Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.058359 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef08c5f4-dc05-46a7-bb1b-8039ba0117aa-config\") pod \"ef08c5f4-dc05-46a7-bb1b-8039ba0117aa\" (UID: \"ef08c5f4-dc05-46a7-bb1b-8039ba0117aa\") " Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.058684 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnn2q\" (UniqueName: \"kubernetes.io/projected/aec91505-d39a-41cf-90af-1593bcb02e68-kube-api-access-dnn2q\") pod \"aec91505-d39a-41cf-90af-1593bcb02e68\" (UID: \"aec91505-d39a-41cf-90af-1593bcb02e68\") " Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.058926 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aec91505-d39a-41cf-90af-1593bcb02e68-config\") pod \"aec91505-d39a-41cf-90af-1593bcb02e68\" (UID: \"aec91505-d39a-41cf-90af-1593bcb02e68\") " Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.059011 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7nlz\" (UniqueName: \"kubernetes.io/projected/ef08c5f4-dc05-46a7-bb1b-8039ba0117aa-kube-api-access-z7nlz\") pod \"ef08c5f4-dc05-46a7-bb1b-8039ba0117aa\" (UID: \"ef08c5f4-dc05-46a7-bb1b-8039ba0117aa\") " Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.059091 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef08c5f4-dc05-46a7-bb1b-8039ba0117aa-config" (OuterVolumeSpecName: "config") pod "ef08c5f4-dc05-46a7-bb1b-8039ba0117aa" (UID: "ef08c5f4-dc05-46a7-bb1b-8039ba0117aa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.059260 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb"] Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.060394 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aec91505-d39a-41cf-90af-1593bcb02e68-config" (OuterVolumeSpecName: "config") pod "aec91505-d39a-41cf-90af-1593bcb02e68" (UID: "aec91505-d39a-41cf-90af-1593bcb02e68"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.060507 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aec91505-d39a-41cf-90af-1593bcb02e68-dns-svc\") pod \"aec91505-d39a-41cf-90af-1593bcb02e68\" (UID: \"aec91505-d39a-41cf-90af-1593bcb02e68\") " Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.062717 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aec91505-d39a-41cf-90af-1593bcb02e68-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.062751 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef08c5f4-dc05-46a7-bb1b-8039ba0117aa-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.063512 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aec91505-d39a-41cf-90af-1593bcb02e68-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aec91505-d39a-41cf-90af-1593bcb02e68" (UID: "aec91505-d39a-41cf-90af-1593bcb02e68"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.067650 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef08c5f4-dc05-46a7-bb1b-8039ba0117aa-kube-api-access-z7nlz" (OuterVolumeSpecName: "kube-api-access-z7nlz") pod "ef08c5f4-dc05-46a7-bb1b-8039ba0117aa" (UID: "ef08c5f4-dc05-46a7-bb1b-8039ba0117aa"). InnerVolumeSpecName "kube-api-access-z7nlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:17:38 crc kubenswrapper[4881]: E0121 11:17:38.070058 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="c3884c64-25d6-42b5-a309-7eafa170719e" Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.070564 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb"] Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.165689 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7nlz\" (UniqueName: \"kubernetes.io/projected/ef08c5f4-dc05-46a7-bb1b-8039ba0117aa-kube-api-access-z7nlz\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.165730 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aec91505-d39a-41cf-90af-1593bcb02e68-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.185187 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aec91505-d39a-41cf-90af-1593bcb02e68-kube-api-access-dnn2q" (OuterVolumeSpecName: "kube-api-access-dnn2q") pod "aec91505-d39a-41cf-90af-1593bcb02e68" (UID: "aec91505-d39a-41cf-90af-1593bcb02e68"). InnerVolumeSpecName "kube-api-access-dnn2q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.269357 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnn2q\" (UniqueName: \"kubernetes.io/projected/aec91505-d39a-41cf-90af-1593bcb02e68-kube-api-access-dnn2q\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.631127 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"cd1973a5-773b-438b-aab7-709fb828324d","Type":"ContainerStarted","Data":"c99268feb4be13da4c28dce5e7226cf0ad72747240ed4a74ebf64b92b1589637"} Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.635249 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"24136f67-aca3-4e40-b3c2-b36b7623475f","Type":"ContainerStarted","Data":"46db8c0233464dda2d06ac7ab4fb2083b484520aa4d757acf2a0f0cfdf7dba09"} Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.635301 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"24136f67-aca3-4e40-b3c2-b36b7623475f","Type":"ContainerStarted","Data":"36a3d53d3d86579821540be368a11ff270a5f9c5df2f78eb854b7b4d9a92c5fc"} Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.639361 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c3884c64-25d6-42b5-a309-7eafa170719e","Type":"ContainerStarted","Data":"bef679e00f68571570a88bad8e19d777782851e71aebb0a71fcd128786dbe4c6"} Jan 21 11:17:38 crc kubenswrapper[4881]: E0121 11:17:38.640504 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="c3884c64-25d6-42b5-a309-7eafa170719e" Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.643250 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"197dd5bf-f68a-4d9d-b75c-de87a54ed46b","Type":"ContainerStarted","Data":"66e36374643a43e11b9a7ebef5758dd162f141744e75e5606bc7931a3eae58b2"} Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.643296 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2" Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.643377 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6557d744f-gt5cx" Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.643538 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cd6c77d8f-6z4pf" Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.739706 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=5.046302884 podStartE2EDuration="51.739679685s" podCreationTimestamp="2026-01-21 11:16:47 +0000 UTC" firstStartedPulling="2026-01-21 11:16:51.109261447 +0000 UTC m=+1198.369217916" lastFinishedPulling="2026-01-21 11:17:37.802638248 +0000 UTC m=+1245.062594717" observedRunningTime="2026-01-21 11:17:38.736990827 +0000 UTC m=+1245.996947296" watchObservedRunningTime="2026-01-21 11:17:38.739679685 +0000 UTC m=+1245.999636154" Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.909101 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66b6fdbd65-2qwr2"] Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.920700 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66b6fdbd65-2qwr2"] Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.971563 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6557d744f-gt5cx"] Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.986401 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6557d744f-gt5cx"] Jan 21 11:17:39 crc kubenswrapper[4881]: I0121 11:17:39.003484 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cd6c77d8f-6z4pf"] Jan 21 11:17:39 crc kubenswrapper[4881]: I0121 11:17:39.010057 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cd6c77d8f-6z4pf"] Jan 21 11:17:39 crc kubenswrapper[4881]: I0121 11:17:39.332828 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338" path="/var/lib/kubelet/pods/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338/volumes" Jan 21 11:17:39 crc kubenswrapper[4881]: I0121 11:17:39.335625 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aec91505-d39a-41cf-90af-1593bcb02e68" path="/var/lib/kubelet/pods/aec91505-d39a-41cf-90af-1593bcb02e68/volumes" Jan 21 11:17:39 crc kubenswrapper[4881]: I0121 11:17:39.337076 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb0e6ce6-181c-4edb-b4b3-d169c41c63a8" path="/var/lib/kubelet/pods/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8/volumes" Jan 21 11:17:39 crc kubenswrapper[4881]: I0121 11:17:39.339853 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef08c5f4-dc05-46a7-bb1b-8039ba0117aa" path="/var/lib/kubelet/pods/ef08c5f4-dc05-46a7-bb1b-8039ba0117aa/volumes" Jan 21 11:17:39 crc kubenswrapper[4881]: I0121 11:17:39.376209 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 21 11:17:39 crc kubenswrapper[4881]: E0121 11:17:39.654043 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="c3884c64-25d6-42b5-a309-7eafa170719e" Jan 21 11:17:40 crc kubenswrapper[4881]: I0121 11:17:40.376382 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 21 11:17:41 crc kubenswrapper[4881]: I0121 11:17:41.668123 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"7960c16a-de64-4154-9072-aee49e3bd573","Type":"ContainerStarted","Data":"bf654632f9f8c849b39eb3984824a19d60064f46a9fcc4111fd748206bfe3c81"} Jan 21 11:17:41 crc kubenswrapper[4881]: I0121 11:17:41.670024 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"078c2368-b247-49d4-8723-fd93918e99b1","Type":"ContainerStarted","Data":"26f697deade0e9783aed3c09129f2f0589fbb10b53e3501c212b7fcc5f5b5d86"} Jan 21 11:17:42 crc kubenswrapper[4881]: I0121 11:17:42.681092 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"44bcf219-3358-4596-9d1e-88a51c415266","Type":"ContainerStarted","Data":"49c33a525e9cb9bae99d4cbbbfd17980a01d8ffda81efc8033434da5404beb26"} Jan 21 11:17:42 crc kubenswrapper[4881]: I0121 11:17:42.683595 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"75733567-f2a6-4331-bdea-147126213437","Type":"ContainerStarted","Data":"3d2c36495c41eb6152a1fc9a05412fce52a5f353e0b59004227d5efed6039fb6"} Jan 21 11:17:42 crc kubenswrapper[4881]: I0121 11:17:42.685685 4881 generic.go:334] "Generic (PLEG): container finished" podID="9ff4a63e-40e5-4133-967e-9ba083f3603b" containerID="d08adb83e3199d21288d6a66e8b2fdb972f8aa4b701580661048ab458692f76e" exitCode=0 Jan 21 11:17:42 crc kubenswrapper[4881]: I0121 11:17:42.685754 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2rtl8" event={"ID":"9ff4a63e-40e5-4133-967e-9ba083f3603b","Type":"ContainerDied","Data":"d08adb83e3199d21288d6a66e8b2fdb972f8aa4b701580661048ab458692f76e"} Jan 21 11:17:42 crc kubenswrapper[4881]: I0121 11:17:42.689115 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f7e90972-9be1-4d3e-852e-e7f7df6e6623","Type":"ContainerStarted","Data":"b30e547e2506fcebf2f8ac627808ad3f0382510a160b2079a570164ee838adfc"} Jan 21 11:17:42 crc kubenswrapper[4881]: I0121 11:17:42.689299 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 21 11:17:42 crc kubenswrapper[4881]: I0121 11:17:42.763347 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=6.719856925 podStartE2EDuration="1m0.763305656s" podCreationTimestamp="2026-01-21 11:16:42 +0000 UTC" firstStartedPulling="2026-01-21 11:16:44.403327444 +0000 UTC m=+1191.663283913" lastFinishedPulling="2026-01-21 11:17:38.446776175 +0000 UTC m=+1245.706732644" observedRunningTime="2026-01-21 11:17:42.761182443 +0000 UTC m=+1250.021138932" watchObservedRunningTime="2026-01-21 11:17:42.763305656 +0000 UTC m=+1250.023262145" Jan 21 11:17:43 crc kubenswrapper[4881]: I0121 11:17:43.429846 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 21 11:17:43 crc kubenswrapper[4881]: I0121 11:17:43.473492 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 21 11:17:43 crc kubenswrapper[4881]: I0121 11:17:43.709745 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2rtl8" event={"ID":"9ff4a63e-40e5-4133-967e-9ba083f3603b","Type":"ContainerStarted","Data":"c71bf5326117e72b17dca906525ab6979082c71793baa1784d2c5afcb9955660"} Jan 21 11:17:43 crc kubenswrapper[4881]: I0121 11:17:43.709823 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2rtl8" 
event={"ID":"9ff4a63e-40e5-4133-967e-9ba083f3603b","Type":"ContainerStarted","Data":"1f99aca2252816b539bcce6eac5a0cfde8f99abcbc456e54343721aa5860f099"} Jan 21 11:17:43 crc kubenswrapper[4881]: I0121 11:17:43.712617 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-2rtl8" Jan 21 11:17:43 crc kubenswrapper[4881]: I0121 11:17:43.712652 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-2rtl8" Jan 21 11:17:43 crc kubenswrapper[4881]: I0121 11:17:43.812016 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-2rtl8" podStartSLOduration=8.704535212 podStartE2EDuration="55.811991611s" podCreationTimestamp="2026-01-21 11:16:48 +0000 UTC" firstStartedPulling="2026-01-21 11:16:51.339840899 +0000 UTC m=+1198.599797368" lastFinishedPulling="2026-01-21 11:17:38.447297298 +0000 UTC m=+1245.707253767" observedRunningTime="2026-01-21 11:17:43.73543605 +0000 UTC m=+1250.995392519" watchObservedRunningTime="2026-01-21 11:17:43.811991611 +0000 UTC m=+1251.071948080" Jan 21 11:17:46 crc kubenswrapper[4881]: I0121 11:17:46.732163 4881 generic.go:334] "Generic (PLEG): container finished" podID="197dd5bf-f68a-4d9d-b75c-de87a54ed46b" containerID="66e36374643a43e11b9a7ebef5758dd162f141744e75e5606bc7931a3eae58b2" exitCode=0 Jan 21 11:17:46 crc kubenswrapper[4881]: I0121 11:17:46.732268 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"197dd5bf-f68a-4d9d-b75c-de87a54ed46b","Type":"ContainerDied","Data":"66e36374643a43e11b9a7ebef5758dd162f141744e75e5606bc7931a3eae58b2"} Jan 21 11:17:47 crc kubenswrapper[4881]: I0121 11:17:47.742214 4881 generic.go:334] "Generic (PLEG): container finished" podID="99aba8a6-cc58-43be-9607-8ae1fcb57257" containerID="1e57b157cf3ee5972a66bda532a4febde866d6c3d74c1e97f0eda2d339b8bfd2" exitCode=0 Jan 21 11:17:47 crc kubenswrapper[4881]: I0121 11:17:47.742291 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7457897f45-vkp6c" event={"ID":"99aba8a6-cc58-43be-9607-8ae1fcb57257","Type":"ContainerDied","Data":"1e57b157cf3ee5972a66bda532a4febde866d6c3d74c1e97f0eda2d339b8bfd2"} Jan 21 11:17:47 crc kubenswrapper[4881]: I0121 11:17:47.746264 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"197dd5bf-f68a-4d9d-b75c-de87a54ed46b","Type":"ContainerStarted","Data":"20fb37ae9dffc2e25ae633ff1ba434f72c1307a7af1496049c2520d4028c8da9"} Jan 21 11:17:47 crc kubenswrapper[4881]: I0121 11:17:47.749026 4881 generic.go:334] "Generic (PLEG): container finished" podID="cd1973a5-773b-438b-aab7-709fb828324d" containerID="c99268feb4be13da4c28dce5e7226cf0ad72747240ed4a74ebf64b92b1589637" exitCode=0 Jan 21 11:17:47 crc kubenswrapper[4881]: I0121 11:17:47.749069 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"cd1973a5-773b-438b-aab7-709fb828324d","Type":"ContainerDied","Data":"c99268feb4be13da4c28dce5e7226cf0ad72747240ed4a74ebf64b92b1589637"} Jan 21 11:17:47 crc kubenswrapper[4881]: I0121 11:17:47.816993 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=14.325754327 podStartE2EDuration="1m7.816967228s" podCreationTimestamp="2026-01-21 11:16:40 +0000 UTC" firstStartedPulling="2026-01-21 11:16:43.771286439 +0000 UTC m=+1191.031243068" lastFinishedPulling="2026-01-21 11:17:37.2624995 +0000 UTC 
Jan 21 11:17:48 crc kubenswrapper[4881]: I0121 11:17:48.245183 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Jan 21 11:17:48 crc kubenswrapper[4881]: I0121 11:17:48.759856 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"cd1973a5-773b-438b-aab7-709fb828324d","Type":"ContainerStarted","Data":"df9cea89f7c13797a23ebce6211650407ff192590f3a5f152f0c4ad0510a66d9"}
Jan 21 11:17:48 crc kubenswrapper[4881]: I0121 11:17:48.762314 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7457897f45-vkp6c" event={"ID":"99aba8a6-cc58-43be-9607-8ae1fcb57257","Type":"ContainerStarted","Data":"3b550ef95b5c642befe5d47915b7748fa9b72e7044ab0f6f21d753c37168b189"}
Jan 21 11:17:48 crc kubenswrapper[4881]: I0121 11:17:48.763289 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7457897f45-vkp6c"
Jan 21 11:17:48 crc kubenswrapper[4881]: I0121 11:17:48.792001 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=14.406460386 podStartE2EDuration="1m7.791984545s" podCreationTimestamp="2026-01-21 11:16:41 +0000 UTC" firstStartedPulling="2026-01-21 11:16:43.87696748 +0000 UTC m=+1191.136923949" lastFinishedPulling="2026-01-21 11:17:37.262491639 +0000 UTC m=+1244.522448108" observedRunningTime="2026-01-21 11:17:48.783857801 +0000 UTC m=+1256.043814260" watchObservedRunningTime="2026-01-21 11:17:48.791984545 +0000 UTC m=+1256.051941004"
Jan 21 11:17:48 crc kubenswrapper[4881]: I0121 11:17:48.814099 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7457897f45-vkp6c" podStartSLOduration=3.546233435 podStartE2EDuration="1m10.814077356s" podCreationTimestamp="2026-01-21 11:16:38 +0000 UTC" firstStartedPulling="2026-01-21 11:16:40.112647669 +0000 UTC m=+1187.372604138" lastFinishedPulling="2026-01-21 11:17:47.38049159 +0000 UTC m=+1254.640448059" observedRunningTime="2026-01-21 11:17:48.808260031 +0000 UTC m=+1256.068216500" watchObservedRunningTime="2026-01-21 11:17:48.814077356 +0000 UTC m=+1256.074033835"
Jan 21 11:17:49 crc kubenswrapper[4881]: I0121 11:17:49.773436 4881 generic.go:334] "Generic (PLEG): container finished" podID="75733567-f2a6-4331-bdea-147126213437" containerID="3d2c36495c41eb6152a1fc9a05412fce52a5f353e0b59004227d5efed6039fb6" exitCode=0
Jan 21 11:17:49 crc kubenswrapper[4881]: I0121 11:17:49.774879 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"75733567-f2a6-4331-bdea-147126213437","Type":"ContainerDied","Data":"3d2c36495c41eb6152a1fc9a05412fce52a5f353e0b59004227d5efed6039fb6"}
Jan 21 11:17:50 crc kubenswrapper[4881]: I0121 11:17:50.795983 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-5dzhr" event={"ID":"b9bd229b-588d-477e-8501-cd976b539e3a","Type":"ContainerStarted","Data":"fb18542d1e8bd27716d9eec28470aaccf2304a790f5a134063b4326b705bf1f8"}
Jan 21 11:17:50 crc kubenswrapper[4881]: I0121 11:17:50.829752 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-5dzhr" podStartSLOduration=-9223371977.025053 podStartE2EDuration="59.829722458s" podCreationTimestamp="2026-01-21 11:16:51 +0000 UTC" firstStartedPulling="2026-01-21 11:17:15.942306475 +0000 UTC m=+1223.202262944" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:17:50.825670527 +0000 UTC m=+1258.085627006" watchObservedRunningTime="2026-01-21 11:17:50.829722458 +0000 UTC m=+1258.089678927"
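The podStartSLOduration=-9223371977.025053 just above is an artifact, not a real measurement: lastFinishedPulling was never recorded (it is the zero time, 0001-01-01), so the pull window saturates at Go's minimum time.Duration and the subsequent subtraction wraps around int64. The exact logged value can be reproduced from the timestamps:

```go
// Reproduces the negative podStartSLOduration above from the logged times.
package main

import (
	"fmt"
	"time"
)

func main() {
	created := time.Date(2026, 1, 21, 11, 16, 51, 0, time.UTC)           // podCreationTimestamp
	firstPull := time.Date(2026, 1, 21, 11, 17, 15, 942306475, time.UTC) // firstStartedPulling
	var lastPull time.Time                                               // zero: 0001-01-01 00:00:00 UTC, as logged
	running := time.Date(2026, 1, 21, 11, 17, 50, 829722458, time.UTC)   // watchObservedRunningTime

	e2e := running.Sub(created)     // 59.829722458s = podStartE2EDuration
	pull := lastPull.Sub(firstPull) // Time.Sub saturates at the minimum Duration (math.MinInt64 ns)
	slo := e2e - pull               // int64 wraparound -> large negative duration
	fmt.Println(slo.Nanoseconds())  // -9223371977025053350, i.e. -9223371977.025053s as logged
}
```

The same mechanism accounts for the podStartSLOduration=-9223371973.898119 reported for dnsmasq-dns-fd8d879fc-flqh9 further down, whose E2E duration is 62.956657s.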
podCreationTimestamp="2026-01-21 11:16:51 +0000 UTC" firstStartedPulling="2026-01-21 11:17:15.942306475 +0000 UTC m=+1223.202262944" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:17:50.825670527 +0000 UTC m=+1258.085627006" watchObservedRunningTime="2026-01-21 11:17:50.829722458 +0000 UTC m=+1258.089678927" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.456752 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7457897f45-vkp6c"] Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.457012 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7457897f45-vkp6c" podUID="99aba8a6-cc58-43be-9607-8ae1fcb57257" containerName="dnsmasq-dns" containerID="cri-o://3b550ef95b5c642befe5d47915b7748fa9b72e7044ab0f6f21d753c37168b189" gracePeriod=10 Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.518337 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bbbc7b58c-8f8v7"] Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.523749 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.528328 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.556287 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bbbc7b58c-8f8v7"] Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.675276 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-ovsdbserver-nb\") pod \"dnsmasq-dns-5bbbc7b58c-8f8v7\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.675332 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-config\") pod \"dnsmasq-dns-5bbbc7b58c-8f8v7\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.675354 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-dns-svc\") pod \"dnsmasq-dns-5bbbc7b58c-8f8v7\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.675380 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krc4s\" (UniqueName: \"kubernetes.io/projected/efbfd001-4602-47b8-8c93-750ee3526e9e-kube-api-access-krc4s\") pod \"dnsmasq-dns-5bbbc7b58c-8f8v7\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.675415 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-ovsdbserver-sb\") pod \"dnsmasq-dns-5bbbc7b58c-8f8v7\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " 
pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.777397 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-ovsdbserver-nb\") pod \"dnsmasq-dns-5bbbc7b58c-8f8v7\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.777524 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-config\") pod \"dnsmasq-dns-5bbbc7b58c-8f8v7\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.777562 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-dns-svc\") pod \"dnsmasq-dns-5bbbc7b58c-8f8v7\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.777641 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krc4s\" (UniqueName: \"kubernetes.io/projected/efbfd001-4602-47b8-8c93-750ee3526e9e-kube-api-access-krc4s\") pod \"dnsmasq-dns-5bbbc7b58c-8f8v7\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.777762 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-ovsdbserver-sb\") pod \"dnsmasq-dns-5bbbc7b58c-8f8v7\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.778289 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-ovsdbserver-nb\") pod \"dnsmasq-dns-5bbbc7b58c-8f8v7\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.778632 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-dns-svc\") pod \"dnsmasq-dns-5bbbc7b58c-8f8v7\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.779030 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-config\") pod \"dnsmasq-dns-5bbbc7b58c-8f8v7\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.779093 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-ovsdbserver-sb\") pod \"dnsmasq-dns-5bbbc7b58c-8f8v7\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.802047 4881 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krc4s\" (UniqueName: \"kubernetes.io/projected/efbfd001-4602-47b8-8c93-750ee3526e9e-kube-api-access-krc4s\") pod \"dnsmasq-dns-5bbbc7b58c-8f8v7\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.824428 4881 generic.go:334] "Generic (PLEG): container finished" podID="99aba8a6-cc58-43be-9607-8ae1fcb57257" containerID="3b550ef95b5c642befe5d47915b7748fa9b72e7044ab0f6f21d753c37168b189" exitCode=0 Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.824557 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7457897f45-vkp6c" event={"ID":"99aba8a6-cc58-43be-9607-8ae1fcb57257","Type":"ContainerDied","Data":"3b550ef95b5c642befe5d47915b7748fa9b72e7044ab0f6f21d753c37168b189"} Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.835750 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c3884c64-25d6-42b5-a309-7eafa170719e","Type":"ContainerStarted","Data":"344d4bc77e52408b60bf5a0ceb6757cad2bade731efde2d13b814ff370df019f"} Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.841685 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" event={"ID":"42132c17-6a2d-48d1-a636-3eae7558d55c","Type":"ContainerStarted","Data":"b42c99eace0541ffcc9144f5cc0186de9eb99d26b014e66fda937ea6fd9eb8a8"} Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.891278 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=8.748487912 podStartE2EDuration="1m1.891251976s" podCreationTimestamp="2026-01-21 11:16:50 +0000 UTC" firstStartedPulling="2026-01-21 11:16:58.381790658 +0000 UTC m=+1205.641747127" lastFinishedPulling="2026-01-21 11:17:51.524554722 +0000 UTC m=+1258.784511191" observedRunningTime="2026-01-21 11:17:51.866350093 +0000 UTC m=+1259.126306562" watchObservedRunningTime="2026-01-21 11:17:51.891251976 +0000 UTC m=+1259.151208445" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.916173 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7457897f45-vkp6c" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.986544 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.090152 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gf75\" (UniqueName: \"kubernetes.io/projected/99aba8a6-cc58-43be-9607-8ae1fcb57257-kube-api-access-4gf75\") pod \"99aba8a6-cc58-43be-9607-8ae1fcb57257\" (UID: \"99aba8a6-cc58-43be-9607-8ae1fcb57257\") " Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.090322 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99aba8a6-cc58-43be-9607-8ae1fcb57257-config\") pod \"99aba8a6-cc58-43be-9607-8ae1fcb57257\" (UID: \"99aba8a6-cc58-43be-9607-8ae1fcb57257\") " Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.090382 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99aba8a6-cc58-43be-9607-8ae1fcb57257-dns-svc\") pod \"99aba8a6-cc58-43be-9607-8ae1fcb57257\" (UID: \"99aba8a6-cc58-43be-9607-8ae1fcb57257\") " Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.096027 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99aba8a6-cc58-43be-9607-8ae1fcb57257-kube-api-access-4gf75" (OuterVolumeSpecName: "kube-api-access-4gf75") pod "99aba8a6-cc58-43be-9607-8ae1fcb57257" (UID: "99aba8a6-cc58-43be-9607-8ae1fcb57257"). InnerVolumeSpecName "kube-api-access-4gf75". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.103567 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.103600 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.151384 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99aba8a6-cc58-43be-9607-8ae1fcb57257-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "99aba8a6-cc58-43be-9607-8ae1fcb57257" (UID: "99aba8a6-cc58-43be-9607-8ae1fcb57257"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.165461 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99aba8a6-cc58-43be-9607-8ae1fcb57257-config" (OuterVolumeSpecName: "config") pod "99aba8a6-cc58-43be-9607-8ae1fcb57257" (UID: "99aba8a6-cc58-43be-9607-8ae1fcb57257"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.194078 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gf75\" (UniqueName: \"kubernetes.io/projected/99aba8a6-cc58-43be-9607-8ae1fcb57257-kube-api-access-4gf75\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.194106 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99aba8a6-cc58-43be-9607-8ae1fcb57257-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.194117 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99aba8a6-cc58-43be-9607-8ae1fcb57257-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.333395 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.333748 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.564501 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bbbc7b58c-8f8v7"] Jan 21 11:17:52 crc kubenswrapper[4881]: W0121 11:17:52.577430 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefbfd001_4602_47b8_8c93_750ee3526e9e.slice/crio-0d2501cc7f927d66e1b692f30c322a8fe23a8259355cb2568f67f16617966fc3 WatchSource:0}: Error finding container 0d2501cc7f927d66e1b692f30c322a8fe23a8259355cb2568f67f16617966fc3: Status 404 returned error can't find the container with id 0d2501cc7f927d66e1b692f30c322a8fe23a8259355cb2568f67f16617966fc3 Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.863169 4881 generic.go:334] "Generic (PLEG): container finished" podID="efbfd001-4602-47b8-8c93-750ee3526e9e" containerID="cdc12a4dbe29fc14fdd129b9c5c90a6d695123d10dd8715736366c33c786a70d" exitCode=0 Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.863466 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" event={"ID":"efbfd001-4602-47b8-8c93-750ee3526e9e","Type":"ContainerDied","Data":"cdc12a4dbe29fc14fdd129b9c5c90a6d695123d10dd8715736366c33c786a70d"} Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.863501 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" event={"ID":"efbfd001-4602-47b8-8c93-750ee3526e9e","Type":"ContainerStarted","Data":"0d2501cc7f927d66e1b692f30c322a8fe23a8259355cb2568f67f16617966fc3"} Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.868147 4881 generic.go:334] "Generic (PLEG): container finished" podID="42132c17-6a2d-48d1-a636-3eae7558d55c" containerID="b42c99eace0541ffcc9144f5cc0186de9eb99d26b014e66fda937ea6fd9eb8a8" exitCode=0 Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.868203 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" event={"ID":"42132c17-6a2d-48d1-a636-3eae7558d55c","Type":"ContainerDied","Data":"b42c99eace0541ffcc9144f5cc0186de9eb99d26b014e66fda937ea6fd9eb8a8"} Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.879380 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7457897f45-vkp6c" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.879660 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7457897f45-vkp6c" event={"ID":"99aba8a6-cc58-43be-9607-8ae1fcb57257","Type":"ContainerDied","Data":"3ca12aa1fc94ac25d568434ebdd78b6fc24b1d504a1ce7b61d9ef849d50cf128"} Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.879721 4881 scope.go:117] "RemoveContainer" containerID="3b550ef95b5c642befe5d47915b7748fa9b72e7044ab0f6f21d753c37168b189" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.913719 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.914132 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.954219 4881 scope.go:117] "RemoveContainer" containerID="1e57b157cf3ee5972a66bda532a4febde866d6c3d74c1e97f0eda2d339b8bfd2" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.955478 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7457897f45-vkp6c"] Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.963317 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7457897f45-vkp6c"] Jan 21 11:17:53 crc kubenswrapper[4881]: I0121 11:17:53.336368 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99aba8a6-cc58-43be-9607-8ae1fcb57257" path="/var/lib/kubelet/pods/99aba8a6-cc58-43be-9607-8ae1fcb57257/volumes" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.714290 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fd8d879fc-flqh9"] Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.750596 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84cb884cf9-wmwx8"] Jan 21 11:17:54 crc kubenswrapper[4881]: E0121 11:17:54.751109 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99aba8a6-cc58-43be-9607-8ae1fcb57257" containerName="dnsmasq-dns" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.751133 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="99aba8a6-cc58-43be-9607-8ae1fcb57257" containerName="dnsmasq-dns" Jan 21 11:17:54 crc kubenswrapper[4881]: E0121 11:17:54.751176 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99aba8a6-cc58-43be-9607-8ae1fcb57257" containerName="init" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.751185 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="99aba8a6-cc58-43be-9607-8ae1fcb57257" containerName="init" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.751389 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="99aba8a6-cc58-43be-9607-8ae1fcb57257" containerName="dnsmasq-dns" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.752510 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.767369 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84cb884cf9-wmwx8"] Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.882863 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45wlj\" (UniqueName: \"kubernetes.io/projected/62435f30-e8fc-4fcd-8b96-4a604439965e-kube-api-access-45wlj\") pod \"dnsmasq-dns-84cb884cf9-wmwx8\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.882947 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-dns-svc\") pod \"dnsmasq-dns-84cb884cf9-wmwx8\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.883007 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-ovsdbserver-sb\") pod \"dnsmasq-dns-84cb884cf9-wmwx8\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.883033 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-config\") pod \"dnsmasq-dns-84cb884cf9-wmwx8\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.883066 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-ovsdbserver-nb\") pod \"dnsmasq-dns-84cb884cf9-wmwx8\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.912754 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" event={"ID":"42132c17-6a2d-48d1-a636-3eae7558d55c","Type":"ContainerStarted","Data":"9f94ea318cabd7d5a85bae60436f7fbbd182561901dd4e1123c0ce68a86bd03b"} Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.913355 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.913525 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" podUID="42132c17-6a2d-48d1-a636-3eae7558d55c" containerName="dnsmasq-dns" containerID="cri-o://9f94ea318cabd7d5a85bae60436f7fbbd182561901dd4e1123c0ce68a86bd03b" gracePeriod=10 Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.918954 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c5b6c25e-e882-4ea4-a284-6f55bfe75093","Type":"ContainerStarted","Data":"af06053084a285bc01330cffd9858a387580ee179dad2789e77044a776e5acf8"} Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.919149 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/kube-state-metrics-0" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.923074 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s642n" event={"ID":"256e0b4a-baac-415c-94c6-09f08fa09c7c","Type":"ContainerStarted","Data":"6c88c5d2d2b14c7b78f92f6f0ad1feaa59a553b6ad9d1babd50678927c694980"} Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.923615 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-s642n" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.932302 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" event={"ID":"efbfd001-4602-47b8-8c93-750ee3526e9e","Type":"ContainerStarted","Data":"459e19bc99c44fd2c891c741bcf902ef1564b6013c62bfcf04dec268218723e7"} Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.933524 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.956680 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" podStartSLOduration=-9223371973.898119 podStartE2EDuration="1m2.956657s" podCreationTimestamp="2026-01-21 11:16:52 +0000 UTC" firstStartedPulling="2026-01-21 11:17:15.958209002 +0000 UTC m=+1223.218165471" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:17:54.947653135 +0000 UTC m=+1262.207609604" watchObservedRunningTime="2026-01-21 11:17:54.956657 +0000 UTC m=+1262.216613469" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.981060 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" podStartSLOduration=3.98103652 podStartE2EDuration="3.98103652s" podCreationTimestamp="2026-01-21 11:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:17:54.973166643 +0000 UTC m=+1262.233123112" watchObservedRunningTime="2026-01-21 11:17:54.98103652 +0000 UTC m=+1262.240992989" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.985230 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-config\") pod \"dnsmasq-dns-84cb884cf9-wmwx8\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.985316 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-ovsdbserver-nb\") pod \"dnsmasq-dns-84cb884cf9-wmwx8\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.985446 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45wlj\" (UniqueName: \"kubernetes.io/projected/62435f30-e8fc-4fcd-8b96-4a604439965e-kube-api-access-45wlj\") pod \"dnsmasq-dns-84cb884cf9-wmwx8\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.985525 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-dns-svc\") pod \"dnsmasq-dns-84cb884cf9-wmwx8\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.985619 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-ovsdbserver-sb\") pod \"dnsmasq-dns-84cb884cf9-wmwx8\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.986693 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-dns-svc\") pod \"dnsmasq-dns-84cb884cf9-wmwx8\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.986754 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-ovsdbserver-sb\") pod \"dnsmasq-dns-84cb884cf9-wmwx8\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.986760 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-ovsdbserver-nb\") pod \"dnsmasq-dns-84cb884cf9-wmwx8\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.986909 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-config\") pod \"dnsmasq-dns-84cb884cf9-wmwx8\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.995422 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=4.491627491 podStartE2EDuration="1m10.995403659s" podCreationTimestamp="2026-01-21 11:16:44 +0000 UTC" firstStartedPulling="2026-01-21 11:16:46.235993682 +0000 UTC m=+1193.495950151" lastFinishedPulling="2026-01-21 11:17:52.73976985 +0000 UTC m=+1259.999726319" observedRunningTime="2026-01-21 11:17:54.990135787 +0000 UTC m=+1262.250092256" watchObservedRunningTime="2026-01-21 11:17:54.995403659 +0000 UTC m=+1262.255360128" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.999281 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.011424 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45wlj\" (UniqueName: \"kubernetes.io/projected/62435f30-e8fc-4fcd-8b96-4a604439965e-kube-api-access-45wlj\") pod \"dnsmasq-dns-84cb884cf9-wmwx8\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.015972 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-s642n" podStartSLOduration=5.307567563 podStartE2EDuration="1m7.015953793s" podCreationTimestamp="2026-01-21 
11:16:48 +0000 UTC" firstStartedPulling="2026-01-21 11:16:50.807472635 +0000 UTC m=+1198.067429114" lastFinishedPulling="2026-01-21 11:17:52.515858875 +0000 UTC m=+1259.775815344" observedRunningTime="2026-01-21 11:17:55.011737037 +0000 UTC m=+1262.271693506" watchObservedRunningTime="2026-01-21 11:17:55.015953793 +0000 UTC m=+1262.275910262" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.079291 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.164134 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.405557 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.511978 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.613558 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84cb884cf9-wmwx8"] Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.628490 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-dns-svc\") pod \"42132c17-6a2d-48d1-a636-3eae7558d55c\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.628596 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-ovsdbserver-nb\") pod \"42132c17-6a2d-48d1-a636-3eae7558d55c\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.628665 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4lhq\" (UniqueName: \"kubernetes.io/projected/42132c17-6a2d-48d1-a636-3eae7558d55c-kube-api-access-x4lhq\") pod \"42132c17-6a2d-48d1-a636-3eae7558d55c\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.628734 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-config\") pod \"42132c17-6a2d-48d1-a636-3eae7558d55c\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.640208 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42132c17-6a2d-48d1-a636-3eae7558d55c-kube-api-access-x4lhq" (OuterVolumeSpecName: "kube-api-access-x4lhq") pod "42132c17-6a2d-48d1-a636-3eae7558d55c" (UID: "42132c17-6a2d-48d1-a636-3eae7558d55c"). InnerVolumeSpecName "kube-api-access-x4lhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.696461 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-config" (OuterVolumeSpecName: "config") pod "42132c17-6a2d-48d1-a636-3eae7558d55c" (UID: "42132c17-6a2d-48d1-a636-3eae7558d55c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.699954 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "42132c17-6a2d-48d1-a636-3eae7558d55c" (UID: "42132c17-6a2d-48d1-a636-3eae7558d55c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.702174 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "42132c17-6a2d-48d1-a636-3eae7558d55c" (UID: "42132c17-6a2d-48d1-a636-3eae7558d55c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.730483 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.730521 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.730533 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4lhq\" (UniqueName: \"kubernetes.io/projected/42132c17-6a2d-48d1-a636-3eae7558d55c-kube-api-access-x4lhq\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.730543 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.879225 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 21 11:17:55 crc kubenswrapper[4881]: E0121 11:17:55.879895 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42132c17-6a2d-48d1-a636-3eae7558d55c" containerName="dnsmasq-dns" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.880002 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="42132c17-6a2d-48d1-a636-3eae7558d55c" containerName="dnsmasq-dns" Jan 21 11:17:55 crc kubenswrapper[4881]: E0121 11:17:55.880074 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42132c17-6a2d-48d1-a636-3eae7558d55c" containerName="init" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.880135 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="42132c17-6a2d-48d1-a636-3eae7558d55c" containerName="init" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.880348 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="42132c17-6a2d-48d1-a636-3eae7558d55c" containerName="dnsmasq-dns" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.893197 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.895696 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-7r2bh" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.895846 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.896722 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.896847 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.933405 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.972092 4881 generic.go:334] "Generic (PLEG): container finished" podID="42132c17-6a2d-48d1-a636-3eae7558d55c" containerID="9f94ea318cabd7d5a85bae60436f7fbbd182561901dd4e1123c0ce68a86bd03b" exitCode=0 Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.972167 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" event={"ID":"42132c17-6a2d-48d1-a636-3eae7558d55c","Type":"ContainerDied","Data":"9f94ea318cabd7d5a85bae60436f7fbbd182561901dd4e1123c0ce68a86bd03b"} Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.972199 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" event={"ID":"42132c17-6a2d-48d1-a636-3eae7558d55c","Type":"ContainerDied","Data":"0a92f372c9af6d73af85424fa74f5bca2b7445ea9a9d2271fd330b7797ed5b0d"} Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.972220 4881 scope.go:117] "RemoveContainer" containerID="9f94ea318cabd7d5a85bae60436f7fbbd182561901dd4e1123c0ce68a86bd03b" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.973297 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:55.995097 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" event={"ID":"62435f30-e8fc-4fcd-8b96-4a604439965e","Type":"ContainerStarted","Data":"44f80926337efad13c65101fd501f43ed3467cedbf9bc0293c7241abb38a34e2"} Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.039221 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.039339 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/eafb725b-4d8c-44b6-8966-4c611d4897d8-cache\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.039362 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.039438 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/eafb725b-4d8c-44b6-8966-4c611d4897d8-lock\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.039462 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgc7f\" (UniqueName: \"kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-kube-api-access-mgc7f\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.079920 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fd8d879fc-flqh9"] Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.086688 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fd8d879fc-flqh9"] Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.141445 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/eafb725b-4d8c-44b6-8966-4c611d4897d8-cache\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.141502 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0" Jan 21 11:17:56 crc kubenswrapper[4881]: E0121 11:17:56.141667 4881 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 11:17:56 crc kubenswrapper[4881]: E0121 11:17:56.141683 4881 projected.go:194] Error preparing data for 
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.141686 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/eafb725b-4d8c-44b6-8966-4c611d4897d8-lock\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.141707 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgc7f\" (UniqueName: \"kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-kube-api-access-mgc7f\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0"
Jan 21 11:17:56 crc kubenswrapper[4881]: E0121 11:17:56.141728 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift podName:eafb725b-4d8c-44b6-8966-4c611d4897d8 nodeName:}" failed. No retries permitted until 2026-01-21 11:17:56.641710905 +0000 UTC m=+1263.901667374 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift") pod "swift-storage-0" (UID: "eafb725b-4d8c-44b6-8966-4c611d4897d8") : configmap "swift-ring-files" not found
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.141839 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.142034 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/eafb725b-4d8c-44b6-8966-4c611d4897d8-cache\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.142604 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/eafb725b-4d8c-44b6-8966-4c611d4897d8-lock\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.143261 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/swift-storage-0"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.175011 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgc7f\" (UniqueName: \"kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-kube-api-access-mgc7f\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.186033 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0"
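"No retries permitted until ... (durationBeforeRetry 500ms)" in the nestedpendingoperations entry above is the volume operation executor backing off after a failure rather than retrying in a tight loop. A small sketch of that retry shape (the 500ms start and the doubling-with-a-cap are illustrative of the pattern, not the kubelet's exact constants):

```go
// Exponential backoff for a failing mount, in the spirit of the
// "durationBeforeRetry 500ms" entry above.
package main

import (
	"errors"
	"fmt"
	"time"
)

func main() {
	mount := func() error { return errors.New(`configmap "swift-ring-files" not found`) }

	wait, maxWait := 500*time.Millisecond, 2*time.Minute
	for attempt := 1; attempt <= 5; attempt++ {
		if err := mount(); err != nil {
			fmt.Printf("attempt %d failed (%v); no retries permitted for %v\n", attempt, err, wait)
			time.Sleep(wait)
			wait *= 2 // roughly double after each failure
			if wait > maxWait {
				wait = maxWait
			}
		}
	}
}
```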
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.370874 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-v4hkf"]
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.372394 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-v4hkf"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.375195 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.375329 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.375287 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.411235 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-v4hkf"]
Jan 21 11:17:56 crc kubenswrapper[4881]: E0121 11:17:56.412369 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-cxpzt ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-v4hkf" podUID="7bb59cc6-16e4-4ecf-ab54-d194a079403e"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.421999 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-j29v8"]
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.424242 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-j29v8"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.434997 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-j29v8"]
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.448324 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-swiftconf\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.448375 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7bb59cc6-16e4-4ecf-ab54-d194a079403e-scripts\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.448430 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-dispersionconf\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.448445 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-combined-ca-bundle\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.448472 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7bb59cc6-16e4-4ecf-ab54-d194a079403e-ring-data-devices\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.448986 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7bb59cc6-16e4-4ecf-ab54-d194a079403e-etc-swift\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.449048 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxpzt\" (UniqueName: \"kubernetes.io/projected/7bb59cc6-16e4-4ecf-ab54-d194a079403e-kube-api-access-cxpzt\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.472212 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-v4hkf"]
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.551085 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7bb59cc6-16e4-4ecf-ab54-d194a079403e-scripts\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.551152 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27451133-57c8-4991-aae0-ec0a82432176-scripts\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.551196 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/27451133-57c8-4991-aae0-ec0a82432176-ring-data-devices\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.551214 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-dispersionconf\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.551231 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-combined-ca-bundle\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.551246 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-dispersionconf\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.551266 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7bb59cc6-16e4-4ecf-ab54-d194a079403e-ring-data-devices\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.551307 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7bb59cc6-16e4-4ecf-ab54-d194a079403e-etc-swift\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.551343 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/27451133-57c8-4991-aae0-ec0a82432176-etc-swift\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.551358 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp4l2\" (UniqueName: \"kubernetes.io/projected/27451133-57c8-4991-aae0-ec0a82432176-kube-api-access-fp4l2\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.551380 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxpzt\" (UniqueName: \"kubernetes.io/projected/7bb59cc6-16e4-4ecf-ab54-d194a079403e-kube-api-access-cxpzt\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.551459 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-swiftconf\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.551475 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-swiftconf\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.551491 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-combined-ca-bundle\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8"
Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.552174 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7bb59cc6-16e4-4ecf-ab54-d194a079403e-scripts\") pod \"swift-ring-rebalance-v4hkf\" (UID: 
\"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.553422 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7bb59cc6-16e4-4ecf-ab54-d194a079403e-etc-swift\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.553710 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7bb59cc6-16e4-4ecf-ab54-d194a079403e-ring-data-devices\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.557674 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-swiftconf\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.559308 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-combined-ca-bundle\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.569731 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-dispersionconf\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.570618 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxpzt\" (UniqueName: \"kubernetes.io/projected/7bb59cc6-16e4-4ecf-ab54-d194a079403e-kube-api-access-cxpzt\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.653275 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-swiftconf\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.653324 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-combined-ca-bundle\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.653389 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27451133-57c8-4991-aae0-ec0a82432176-scripts\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.653435 4881 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/27451133-57c8-4991-aae0-ec0a82432176-ring-data-devices\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.653457 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-dispersionconf\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.653555 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/27451133-57c8-4991-aae0-ec0a82432176-etc-swift\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.653581 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp4l2\" (UniqueName: \"kubernetes.io/projected/27451133-57c8-4991-aae0-ec0a82432176-kube-api-access-fp4l2\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.653630 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0" Jan 21 11:17:56 crc kubenswrapper[4881]: E0121 11:17:56.653797 4881 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 11:17:56 crc kubenswrapper[4881]: E0121 11:17:56.653816 4881 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 11:17:56 crc kubenswrapper[4881]: E0121 11:17:56.653873 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift podName:eafb725b-4d8c-44b6-8966-4c611d4897d8 nodeName:}" failed. No retries permitted until 2026-01-21 11:17:57.653856564 +0000 UTC m=+1264.913813033 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift") pod "swift-storage-0" (UID: "eafb725b-4d8c-44b6-8966-4c611d4897d8") : configmap "swift-ring-files" not found Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.654166 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/27451133-57c8-4991-aae0-ec0a82432176-etc-swift\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.654509 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/27451133-57c8-4991-aae0-ec0a82432176-ring-data-devices\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.655309 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27451133-57c8-4991-aae0-ec0a82432176-scripts\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.657888 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-combined-ca-bundle\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.658257 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-swiftconf\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.660837 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-dispersionconf\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.671433 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp4l2\" (UniqueName: \"kubernetes.io/projected/27451133-57c8-4991-aae0-ec0a82432176-kube-api-access-fp4l2\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.752086 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.003548 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.051688 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.147486 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.163995 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7bb59cc6-16e4-4ecf-ab54-d194a079403e-scripts\") pod \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.164063 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-swiftconf\") pod \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.164089 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-combined-ca-bundle\") pod \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.164175 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7bb59cc6-16e4-4ecf-ab54-d194a079403e-ring-data-devices\") pod \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.164243 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxpzt\" (UniqueName: \"kubernetes.io/projected/7bb59cc6-16e4-4ecf-ab54-d194a079403e-kube-api-access-cxpzt\") pod \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.164374 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-dispersionconf\") pod \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.164433 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7bb59cc6-16e4-4ecf-ab54-d194a079403e-etc-swift\") pod \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.165443 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bb59cc6-16e4-4ecf-ab54-d194a079403e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "7bb59cc6-16e4-4ecf-ab54-d194a079403e" (UID: "7bb59cc6-16e4-4ecf-ab54-d194a079403e"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.167512 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb59cc6-16e4-4ecf-ab54-d194a079403e-scripts" (OuterVolumeSpecName: "scripts") pod "7bb59cc6-16e4-4ecf-ab54-d194a079403e" (UID: "7bb59cc6-16e4-4ecf-ab54-d194a079403e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.173587 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "7bb59cc6-16e4-4ecf-ab54-d194a079403e" (UID: "7bb59cc6-16e4-4ecf-ab54-d194a079403e"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.175079 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb59cc6-16e4-4ecf-ab54-d194a079403e-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "7bb59cc6-16e4-4ecf-ab54-d194a079403e" (UID: "7bb59cc6-16e4-4ecf-ab54-d194a079403e"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.176241 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7bb59cc6-16e4-4ecf-ab54-d194a079403e" (UID: "7bb59cc6-16e4-4ecf-ab54-d194a079403e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.178193 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "7bb59cc6-16e4-4ecf-ab54-d194a079403e" (UID: "7bb59cc6-16e4-4ecf-ab54-d194a079403e"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.180166 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb59cc6-16e4-4ecf-ab54-d194a079403e-kube-api-access-cxpzt" (OuterVolumeSpecName: "kube-api-access-cxpzt") pod "7bb59cc6-16e4-4ecf-ab54-d194a079403e" (UID: "7bb59cc6-16e4-4ecf-ab54-d194a079403e"). InnerVolumeSpecName "kube-api-access-cxpzt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.267165 4881 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7bb59cc6-16e4-4ecf-ab54-d194a079403e-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.267500 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxpzt\" (UniqueName: \"kubernetes.io/projected/7bb59cc6-16e4-4ecf-ab54-d194a079403e-kube-api-access-cxpzt\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.267510 4881 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.267555 4881 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7bb59cc6-16e4-4ecf-ab54-d194a079403e-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.267589 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7bb59cc6-16e4-4ecf-ab54-d194a079403e-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.267601 4881 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.267618 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.307064 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.348192 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42132c17-6a2d-48d1-a636-3eae7558d55c" path="/var/lib/kubelet/pods/42132c17-6a2d-48d1-a636-3eae7558d55c/volumes" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.400374 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.608045 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.610083 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.612883 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.612881 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-675dt" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.613452 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.613508 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.639716 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.689691 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b3882b01-10ce-4832-ae71-676a8b65b086-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.689808 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3882b01-10ce-4832-ae71-676a8b65b086-scripts\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.689845 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3882b01-10ce-4832-ae71-676a8b65b086-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.689903 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6z87\" (UniqueName: \"kubernetes.io/projected/b3882b01-10ce-4832-ae71-676a8b65b086-kube-api-access-b6z87\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.689938 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3882b01-10ce-4832-ae71-676a8b65b086-config\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.690104 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3882b01-10ce-4832-ae71-676a8b65b086-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.690274 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.690336 4881 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3882b01-10ce-4832-ae71-676a8b65b086-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: E0121 11:17:57.690413 4881 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 11:17:57 crc kubenswrapper[4881]: E0121 11:17:57.690438 4881 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 11:17:57 crc kubenswrapper[4881]: E0121 11:17:57.690498 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift podName:eafb725b-4d8c-44b6-8966-4c611d4897d8 nodeName:}" failed. No retries permitted until 2026-01-21 11:17:59.690475709 +0000 UTC m=+1266.950432178 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift") pod "swift-storage-0" (UID: "eafb725b-4d8c-44b6-8966-4c611d4897d8") : configmap "swift-ring-files" not found Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.792416 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3882b01-10ce-4832-ae71-676a8b65b086-scripts\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.792487 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3882b01-10ce-4832-ae71-676a8b65b086-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.792555 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6z87\" (UniqueName: \"kubernetes.io/projected/b3882b01-10ce-4832-ae71-676a8b65b086-kube-api-access-b6z87\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.792594 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3882b01-10ce-4832-ae71-676a8b65b086-config\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.792639 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3882b01-10ce-4832-ae71-676a8b65b086-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.792700 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3882b01-10ce-4832-ae71-676a8b65b086-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.793373 4881 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b3882b01-10ce-4832-ae71-676a8b65b086-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.793640 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3882b01-10ce-4832-ae71-676a8b65b086-scripts\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.793728 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3882b01-10ce-4832-ae71-676a8b65b086-config\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.793823 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b3882b01-10ce-4832-ae71-676a8b65b086-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.798267 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3882b01-10ce-4832-ae71-676a8b65b086-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.806100 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3882b01-10ce-4832-ae71-676a8b65b086-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.811803 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6z87\" (UniqueName: \"kubernetes.io/projected/b3882b01-10ce-4832-ae71-676a8b65b086-kube-api-access-b6z87\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.813034 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3882b01-10ce-4832-ae71-676a8b65b086-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.931461 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 11:17:58 crc kubenswrapper[4881]: I0121 11:17:58.010944 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:58 crc kubenswrapper[4881]: I0121 11:17:58.061163 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-v4hkf"] Jan 21 11:17:58 crc kubenswrapper[4881]: I0121 11:17:58.070745 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-v4hkf"] Jan 21 11:17:59 crc kubenswrapper[4881]: I0121 11:17:59.324358 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb59cc6-16e4-4ecf-ab54-d194a079403e" path="/var/lib/kubelet/pods/7bb59cc6-16e4-4ecf-ab54-d194a079403e/volumes" Jan 21 11:17:59 crc kubenswrapper[4881]: I0121 11:17:59.506993 4881 scope.go:117] "RemoveContainer" containerID="b42c99eace0541ffcc9144f5cc0186de9eb99d26b014e66fda937ea6fd9eb8a8" Jan 21 11:17:59 crc kubenswrapper[4881]: I0121 11:17:59.733828 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0" Jan 21 11:17:59 crc kubenswrapper[4881]: E0121 11:17:59.734232 4881 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 11:17:59 crc kubenswrapper[4881]: E0121 11:17:59.734259 4881 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 11:17:59 crc kubenswrapper[4881]: E0121 11:17:59.734320 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift podName:eafb725b-4d8c-44b6-8966-4c611d4897d8 nodeName:}" failed. No retries permitted until 2026-01-21 11:18:03.734299995 +0000 UTC m=+1270.994256464 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift") pod "swift-storage-0" (UID: "eafb725b-4d8c-44b6-8966-4c611d4897d8") : configmap "swift-ring-files" not found Jan 21 11:17:59 crc kubenswrapper[4881]: I0121 11:17:59.807507 4881 scope.go:117] "RemoveContainer" containerID="9f94ea318cabd7d5a85bae60436f7fbbd182561901dd4e1123c0ce68a86bd03b" Jan 21 11:17:59 crc kubenswrapper[4881]: E0121 11:17:59.808371 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f94ea318cabd7d5a85bae60436f7fbbd182561901dd4e1123c0ce68a86bd03b\": container with ID starting with 9f94ea318cabd7d5a85bae60436f7fbbd182561901dd4e1123c0ce68a86bd03b not found: ID does not exist" containerID="9f94ea318cabd7d5a85bae60436f7fbbd182561901dd4e1123c0ce68a86bd03b" Jan 21 11:17:59 crc kubenswrapper[4881]: I0121 11:17:59.808420 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f94ea318cabd7d5a85bae60436f7fbbd182561901dd4e1123c0ce68a86bd03b"} err="failed to get container status \"9f94ea318cabd7d5a85bae60436f7fbbd182561901dd4e1123c0ce68a86bd03b\": rpc error: code = NotFound desc = could not find container \"9f94ea318cabd7d5a85bae60436f7fbbd182561901dd4e1123c0ce68a86bd03b\": container with ID starting with 9f94ea318cabd7d5a85bae60436f7fbbd182561901dd4e1123c0ce68a86bd03b not found: ID does not exist" Jan 21 11:17:59 crc kubenswrapper[4881]: I0121 11:17:59.808452 4881 scope.go:117] "RemoveContainer" containerID="b42c99eace0541ffcc9144f5cc0186de9eb99d26b014e66fda937ea6fd9eb8a8" Jan 21 11:17:59 crc kubenswrapper[4881]: E0121 11:17:59.808716 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b42c99eace0541ffcc9144f5cc0186de9eb99d26b014e66fda937ea6fd9eb8a8\": container with ID starting with b42c99eace0541ffcc9144f5cc0186de9eb99d26b014e66fda937ea6fd9eb8a8 not found: ID does not exist" containerID="b42c99eace0541ffcc9144f5cc0186de9eb99d26b014e66fda937ea6fd9eb8a8" Jan 21 11:17:59 crc kubenswrapper[4881]: I0121 11:17:59.808739 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b42c99eace0541ffcc9144f5cc0186de9eb99d26b014e66fda937ea6fd9eb8a8"} err="failed to get container status \"b42c99eace0541ffcc9144f5cc0186de9eb99d26b014e66fda937ea6fd9eb8a8\": rpc error: code = NotFound desc = could not find container \"b42c99eace0541ffcc9144f5cc0186de9eb99d26b014e66fda937ea6fd9eb8a8\": container with ID starting with b42c99eace0541ffcc9144f5cc0186de9eb99d26b014e66fda937ea6fd9eb8a8 not found: ID does not exist" Jan 21 11:17:59 crc kubenswrapper[4881]: I0121 11:17:59.850818 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:17:59 crc kubenswrapper[4881]: I0121 11:17:59.850878 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:17:59 crc kubenswrapper[4881]: I0121 11:17:59.850949 4881 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 11:17:59 crc kubenswrapper[4881]: I0121 11:17:59.851762 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d0f3ab6355e31b97e337f7f21fb84796e3dea68bac874475991ce7eb43a93a82"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:17:59 crc kubenswrapper[4881]: I0121 11:17:59.851906 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://d0f3ab6355e31b97e337f7f21fb84796e3dea68bac874475991ce7eb43a93a82" gracePeriod=600 Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.033071 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="d0f3ab6355e31b97e337f7f21fb84796e3dea68bac874475991ce7eb43a93a82" exitCode=0 Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.033417 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"d0f3ab6355e31b97e337f7f21fb84796e3dea68bac874475991ce7eb43a93a82"} Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.033457 4881 scope.go:117] "RemoveContainer" containerID="abaaf16a1930b4e2e9a1e1d952f2948a8b09bfb0c0f18add47eef44fe07067c5" Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.233670 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.353752 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-j29v8"] Jan 21 11:18:00 crc kubenswrapper[4881]: W0121 11:18:00.369551 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27451133_57c8_4991_aae0_ec0a82432176.slice/crio-a7d4d23aa2fd8ae274e39ac46c3595d9d1bd6e0b97327033852c004b5061046a WatchSource:0}: Error finding container a7d4d23aa2fd8ae274e39ac46c3595d9d1bd6e0b97327033852c004b5061046a: Status 404 returned error can't find the container with id a7d4d23aa2fd8ae274e39ac46c3595d9d1bd6e0b97327033852c004b5061046a Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.396307 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-cp5cl"] Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.398168 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-cp5cl" Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.400493 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.405378 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cp5cl"] Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.450891 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkx6w\" (UniqueName: \"kubernetes.io/projected/07845bf5-b5f8-4a00-9d0e-b86f5062f1ec-kube-api-access-lkx6w\") pod \"root-account-create-update-cp5cl\" (UID: \"07845bf5-b5f8-4a00-9d0e-b86f5062f1ec\") " pod="openstack/root-account-create-update-cp5cl" Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.450966 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07845bf5-b5f8-4a00-9d0e-b86f5062f1ec-operator-scripts\") pod \"root-account-create-update-cp5cl\" (UID: \"07845bf5-b5f8-4a00-9d0e-b86f5062f1ec\") " pod="openstack/root-account-create-update-cp5cl" Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.553892 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkx6w\" (UniqueName: \"kubernetes.io/projected/07845bf5-b5f8-4a00-9d0e-b86f5062f1ec-kube-api-access-lkx6w\") pod \"root-account-create-update-cp5cl\" (UID: \"07845bf5-b5f8-4a00-9d0e-b86f5062f1ec\") " pod="openstack/root-account-create-update-cp5cl" Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.553985 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07845bf5-b5f8-4a00-9d0e-b86f5062f1ec-operator-scripts\") pod \"root-account-create-update-cp5cl\" (UID: \"07845bf5-b5f8-4a00-9d0e-b86f5062f1ec\") " pod="openstack/root-account-create-update-cp5cl" Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.554943 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07845bf5-b5f8-4a00-9d0e-b86f5062f1ec-operator-scripts\") pod \"root-account-create-update-cp5cl\" (UID: \"07845bf5-b5f8-4a00-9d0e-b86f5062f1ec\") " pod="openstack/root-account-create-update-cp5cl" Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.581859 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkx6w\" (UniqueName: \"kubernetes.io/projected/07845bf5-b5f8-4a00-9d0e-b86f5062f1ec-kube-api-access-lkx6w\") pod \"root-account-create-update-cp5cl\" (UID: \"07845bf5-b5f8-4a00-9d0e-b86f5062f1ec\") " pod="openstack/root-account-create-update-cp5cl" Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.729913 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-cp5cl" Jan 21 11:18:01 crc kubenswrapper[4881]: I0121 11:18:01.094330 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"7331cbf4e5c1ebad90ff508798581f83536e17ac3c1ee9a79afc3f65f6e8ad1a"} Jan 21 11:18:01 crc kubenswrapper[4881]: I0121 11:18:01.097104 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j29v8" event={"ID":"27451133-57c8-4991-aae0-ec0a82432176","Type":"ContainerStarted","Data":"a7d4d23aa2fd8ae274e39ac46c3595d9d1bd6e0b97327033852c004b5061046a"} Jan 21 11:18:01 crc kubenswrapper[4881]: I0121 11:18:01.109043 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b3882b01-10ce-4832-ae71-676a8b65b086","Type":"ContainerStarted","Data":"dfc252ec226f016dfb22bc3529cd27daf8610c7a980bdeddd00c7007e0a69959"} Jan 21 11:18:01 crc kubenswrapper[4881]: I0121 11:18:01.116660 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"75733567-f2a6-4331-bdea-147126213437","Type":"ContainerStarted","Data":"a56efe39870006b796c3201c8dc3334fb4d25c094ef7e6facbf2f393bd54653c"} Jan 21 11:18:01 crc kubenswrapper[4881]: I0121 11:18:01.118834 4881 generic.go:334] "Generic (PLEG): container finished" podID="62435f30-e8fc-4fcd-8b96-4a604439965e" containerID="f24832aadef02f1c7ff84c5f003b7d3cb18bb769662ee1a6581898a328c41e06" exitCode=0 Jan 21 11:18:01 crc kubenswrapper[4881]: I0121 11:18:01.118886 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" event={"ID":"62435f30-e8fc-4fcd-8b96-4a604439965e","Type":"ContainerDied","Data":"f24832aadef02f1c7ff84c5f003b7d3cb18bb769662ee1a6581898a328c41e06"} Jan 21 11:18:01 crc kubenswrapper[4881]: I0121 11:18:01.989088 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.377364 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b4bf-account-create-update-6p74j"] Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.379529 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b4bf-account-create-update-6p74j" Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.384192 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.415357 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-nv8vf"] Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.416551 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-nv8vf" Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.423777 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b4bf-account-create-update-6p74j"] Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.439642 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-nv8vf"] Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.512697 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l844\" (UniqueName: \"kubernetes.io/projected/331fda3a-4e64-4824-abd7-42eaef7b9b4f-kube-api-access-2l844\") pod \"keystone-b4bf-account-create-update-6p74j\" (UID: \"331fda3a-4e64-4824-abd7-42eaef7b9b4f\") " pod="openstack/keystone-b4bf-account-create-update-6p74j" Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.512865 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/331fda3a-4e64-4824-abd7-42eaef7b9b4f-operator-scripts\") pod \"keystone-b4bf-account-create-update-6p74j\" (UID: \"331fda3a-4e64-4824-abd7-42eaef7b9b4f\") " pod="openstack/keystone-b4bf-account-create-update-6p74j" Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.512901 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64bd2\" (UniqueName: \"kubernetes.io/projected/317bbc59-5154-4c0e-920a-3227d1ec4982-kube-api-access-64bd2\") pod \"keystone-db-create-nv8vf\" (UID: \"317bbc59-5154-4c0e-920a-3227d1ec4982\") " pod="openstack/keystone-db-create-nv8vf" Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.513142 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/317bbc59-5154-4c0e-920a-3227d1ec4982-operator-scripts\") pod \"keystone-db-create-nv8vf\" (UID: \"317bbc59-5154-4c0e-920a-3227d1ec4982\") " pod="openstack/keystone-db-create-nv8vf" Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.567172 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cp5cl"] Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.614855 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/317bbc59-5154-4c0e-920a-3227d1ec4982-operator-scripts\") pod \"keystone-db-create-nv8vf\" (UID: \"317bbc59-5154-4c0e-920a-3227d1ec4982\") " pod="openstack/keystone-db-create-nv8vf" Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.614941 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l844\" (UniqueName: \"kubernetes.io/projected/331fda3a-4e64-4824-abd7-42eaef7b9b4f-kube-api-access-2l844\") pod \"keystone-b4bf-account-create-update-6p74j\" (UID: \"331fda3a-4e64-4824-abd7-42eaef7b9b4f\") " pod="openstack/keystone-b4bf-account-create-update-6p74j" Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.615004 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/331fda3a-4e64-4824-abd7-42eaef7b9b4f-operator-scripts\") pod \"keystone-b4bf-account-create-update-6p74j\" (UID: \"331fda3a-4e64-4824-abd7-42eaef7b9b4f\") " pod="openstack/keystone-b4bf-account-create-update-6p74j" Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 
11:18:02.615052 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64bd2\" (UniqueName: \"kubernetes.io/projected/317bbc59-5154-4c0e-920a-3227d1ec4982-kube-api-access-64bd2\") pod \"keystone-db-create-nv8vf\" (UID: \"317bbc59-5154-4c0e-920a-3227d1ec4982\") " pod="openstack/keystone-db-create-nv8vf"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.616597 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/317bbc59-5154-4c0e-920a-3227d1ec4982-operator-scripts\") pod \"keystone-db-create-nv8vf\" (UID: \"317bbc59-5154-4c0e-920a-3227d1ec4982\") " pod="openstack/keystone-db-create-nv8vf"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.616760 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/331fda3a-4e64-4824-abd7-42eaef7b9b4f-operator-scripts\") pod \"keystone-b4bf-account-create-update-6p74j\" (UID: \"331fda3a-4e64-4824-abd7-42eaef7b9b4f\") " pod="openstack/keystone-b4bf-account-create-update-6p74j"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.640335 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64bd2\" (UniqueName: \"kubernetes.io/projected/317bbc59-5154-4c0e-920a-3227d1ec4982-kube-api-access-64bd2\") pod \"keystone-db-create-nv8vf\" (UID: \"317bbc59-5154-4c0e-920a-3227d1ec4982\") " pod="openstack/keystone-db-create-nv8vf"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.644052 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l844\" (UniqueName: \"kubernetes.io/projected/331fda3a-4e64-4824-abd7-42eaef7b9b4f-kube-api-access-2l844\") pod \"keystone-b4bf-account-create-update-6p74j\" (UID: \"331fda3a-4e64-4824-abd7-42eaef7b9b4f\") " pod="openstack/keystone-b4bf-account-create-update-6p74j"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.702307 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b4bf-account-create-update-6p74j"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.713682 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-smj4g"]
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.720470 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-smj4g"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.725710 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-smj4g"]
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.746386 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nv8vf"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.818430 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r258x\" (UniqueName: \"kubernetes.io/projected/b6a422f0-bb4b-442c-a2d7-96ac90ffde83-kube-api-access-r258x\") pod \"placement-db-create-smj4g\" (UID: \"b6a422f0-bb4b-442c-a2d7-96ac90ffde83\") " pod="openstack/placement-db-create-smj4g"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.818898 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6a422f0-bb4b-442c-a2d7-96ac90ffde83-operator-scripts\") pod \"placement-db-create-smj4g\" (UID: \"b6a422f0-bb4b-442c-a2d7-96ac90ffde83\") " pod="openstack/placement-db-create-smj4g"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.921058 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r258x\" (UniqueName: \"kubernetes.io/projected/b6a422f0-bb4b-442c-a2d7-96ac90ffde83-kube-api-access-r258x\") pod \"placement-db-create-smj4g\" (UID: \"b6a422f0-bb4b-442c-a2d7-96ac90ffde83\") " pod="openstack/placement-db-create-smj4g"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.921512 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6a422f0-bb4b-442c-a2d7-96ac90ffde83-operator-scripts\") pod \"placement-db-create-smj4g\" (UID: \"b6a422f0-bb4b-442c-a2d7-96ac90ffde83\") " pod="openstack/placement-db-create-smj4g"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.922838 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6a422f0-bb4b-442c-a2d7-96ac90ffde83-operator-scripts\") pod \"placement-db-create-smj4g\" (UID: \"b6a422f0-bb4b-442c-a2d7-96ac90ffde83\") " pod="openstack/placement-db-create-smj4g"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.949467 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-a34b-account-create-update-hm56c"]
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.951248 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a34b-account-create-update-hm56c"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.956281 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r258x\" (UniqueName: \"kubernetes.io/projected/b6a422f0-bb4b-442c-a2d7-96ac90ffde83-kube-api-access-r258x\") pod \"placement-db-create-smj4g\" (UID: \"b6a422f0-bb4b-442c-a2d7-96ac90ffde83\") " pod="openstack/placement-db-create-smj4g"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.956417 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.983016 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a34b-account-create-update-hm56c"]
Jan 21 11:18:03 crc kubenswrapper[4881]: I0121 11:18:03.023490 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c4be317-c914-45c5-8da4-1fe7d647db7e-operator-scripts\") pod \"placement-a34b-account-create-update-hm56c\" (UID: \"1c4be317-c914-45c5-8da4-1fe7d647db7e\") " pod="openstack/placement-a34b-account-create-update-hm56c"
Jan 21 11:18:03 crc kubenswrapper[4881]: I0121 11:18:03.025215 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7s25\" (UniqueName: \"kubernetes.io/projected/1c4be317-c914-45c5-8da4-1fe7d647db7e-kube-api-access-h7s25\") pod \"placement-a34b-account-create-update-hm56c\" (UID: \"1c4be317-c914-45c5-8da4-1fe7d647db7e\") " pod="openstack/placement-a34b-account-create-update-hm56c"
Jan 21 11:18:03 crc kubenswrapper[4881]: I0121 11:18:03.053483 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-smj4g"
Jan 21 11:18:03 crc kubenswrapper[4881]: I0121 11:18:03.128926 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7s25\" (UniqueName: \"kubernetes.io/projected/1c4be317-c914-45c5-8da4-1fe7d647db7e-kube-api-access-h7s25\") pod \"placement-a34b-account-create-update-hm56c\" (UID: \"1c4be317-c914-45c5-8da4-1fe7d647db7e\") " pod="openstack/placement-a34b-account-create-update-hm56c"
Jan 21 11:18:03 crc kubenswrapper[4881]: I0121 11:18:03.129044 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c4be317-c914-45c5-8da4-1fe7d647db7e-operator-scripts\") pod \"placement-a34b-account-create-update-hm56c\" (UID: \"1c4be317-c914-45c5-8da4-1fe7d647db7e\") " pod="openstack/placement-a34b-account-create-update-hm56c"
Jan 21 11:18:03 crc kubenswrapper[4881]: I0121 11:18:03.130423 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c4be317-c914-45c5-8da4-1fe7d647db7e-operator-scripts\") pod \"placement-a34b-account-create-update-hm56c\" (UID: \"1c4be317-c914-45c5-8da4-1fe7d647db7e\") " pod="openstack/placement-a34b-account-create-update-hm56c"
Jan 21 11:18:03 crc kubenswrapper[4881]: I0121 11:18:03.148993 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7s25\" (UniqueName: \"kubernetes.io/projected/1c4be317-c914-45c5-8da4-1fe7d647db7e-kube-api-access-h7s25\") pod \"placement-a34b-account-create-update-hm56c\" (UID: \"1c4be317-c914-45c5-8da4-1fe7d647db7e\") " pod="openstack/placement-a34b-account-create-update-hm56c"
Jan 21 11:18:03 crc kubenswrapper[4881]: I0121 11:18:03.184370 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"75733567-f2a6-4331-bdea-147126213437","Type":"ContainerStarted","Data":"5833adb0117a8d41a669b51e672fa4471dd8e152778ebc0db32735d286328549"}
Jan 21 11:18:03 crc kubenswrapper[4881]: I0121 11:18:03.287312 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a34b-account-create-update-hm56c"
Jan 21 11:18:03 crc kubenswrapper[4881]: I0121 11:18:03.743423 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0"
Jan 21 11:18:03 crc kubenswrapper[4881]: E0121 11:18:03.743692 4881 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 21 11:18:03 crc kubenswrapper[4881]: E0121 11:18:03.744336 4881 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 21 11:18:03 crc kubenswrapper[4881]: E0121 11:18:03.744522 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift podName:eafb725b-4d8c-44b6-8966-4c611d4897d8 nodeName:}" failed. No retries permitted until 2026-01-21 11:18:11.74449658 +0000 UTC m=+1279.004453069 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift") pod "swift-storage-0" (UID: "eafb725b-4d8c-44b6-8966-4c611d4897d8") : configmap "swift-ring-files" not found
Jan 21 11:18:04 crc kubenswrapper[4881]: I0121 11:18:04.731897 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-gc2qj"]
Jan 21 11:18:04 crc kubenswrapper[4881]: I0121 11:18:04.735198 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-gc2qj"
Jan 21 11:18:04 crc kubenswrapper[4881]: I0121 11:18:04.740270 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-gc2qj"]
Jan 21 11:18:04 crc kubenswrapper[4881]: I0121 11:18:04.867648 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ecc1262-3ebf-4a17-bc42-507ce55f6d7e-operator-scripts\") pod \"watcher-db-create-gc2qj\" (UID: \"5ecc1262-3ebf-4a17-bc42-507ce55f6d7e\") " pod="openstack/watcher-db-create-gc2qj"
Jan 21 11:18:04 crc kubenswrapper[4881]: I0121 11:18:04.868093 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw7gw\" (UniqueName: \"kubernetes.io/projected/5ecc1262-3ebf-4a17-bc42-507ce55f6d7e-kube-api-access-zw7gw\") pod \"watcher-db-create-gc2qj\" (UID: \"5ecc1262-3ebf-4a17-bc42-507ce55f6d7e\") " pod="openstack/watcher-db-create-gc2qj"
Jan 21 11:18:04 crc kubenswrapper[4881]: I0121 11:18:04.942993 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-8d4c-account-create-update-f29tp"]
Jan 21 11:18:04 crc kubenswrapper[4881]: I0121 11:18:04.944523 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-8d4c-account-create-update-f29tp"
Jan 21 11:18:04 crc kubenswrapper[4881]: I0121 11:18:04.948366 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret"
Jan 21 11:18:04 crc kubenswrapper[4881]: I0121 11:18:04.952245 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 21 11:18:04 crc kubenswrapper[4881]: I0121 11:18:04.953525 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-8d4c-account-create-update-f29tp"]
Jan 21 11:18:04 crc kubenswrapper[4881]: I0121 11:18:04.970102 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw7gw\" (UniqueName: \"kubernetes.io/projected/5ecc1262-3ebf-4a17-bc42-507ce55f6d7e-kube-api-access-zw7gw\") pod \"watcher-db-create-gc2qj\" (UID: \"5ecc1262-3ebf-4a17-bc42-507ce55f6d7e\") " pod="openstack/watcher-db-create-gc2qj"
Jan 21 11:18:04 crc kubenswrapper[4881]: I0121 11:18:04.970216 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ecc1262-3ebf-4a17-bc42-507ce55f6d7e-operator-scripts\") pod \"watcher-db-create-gc2qj\" (UID: \"5ecc1262-3ebf-4a17-bc42-507ce55f6d7e\") " pod="openstack/watcher-db-create-gc2qj"
Jan 21 11:18:04 crc kubenswrapper[4881]: I0121 11:18:04.970878 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ecc1262-3ebf-4a17-bc42-507ce55f6d7e-operator-scripts\") pod \"watcher-db-create-gc2qj\" (UID: \"5ecc1262-3ebf-4a17-bc42-507ce55f6d7e\") " pod="openstack/watcher-db-create-gc2qj"
Jan 21 11:18:05 crc kubenswrapper[4881]: I0121 11:18:05.060541 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw7gw\" (UniqueName: \"kubernetes.io/projected/5ecc1262-3ebf-4a17-bc42-507ce55f6d7e-kube-api-access-zw7gw\") pod \"watcher-db-create-gc2qj\" (UID: \"5ecc1262-3ebf-4a17-bc42-507ce55f6d7e\") " pod="openstack/watcher-db-create-gc2qj"
Jan 21 11:18:05 crc kubenswrapper[4881]: I0121 11:18:05.069750 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-gc2qj"
Jan 21 11:18:05 crc kubenswrapper[4881]: I0121 11:18:05.072003 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv8rw\" (UniqueName: \"kubernetes.io/projected/13ea4f5c-fa1d-485c-80b3-a260d8725e81-kube-api-access-gv8rw\") pod \"watcher-8d4c-account-create-update-f29tp\" (UID: \"13ea4f5c-fa1d-485c-80b3-a260d8725e81\") " pod="openstack/watcher-8d4c-account-create-update-f29tp"
Jan 21 11:18:05 crc kubenswrapper[4881]: I0121 11:18:05.072104 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13ea4f5c-fa1d-485c-80b3-a260d8725e81-operator-scripts\") pod \"watcher-8d4c-account-create-update-f29tp\" (UID: \"13ea4f5c-fa1d-485c-80b3-a260d8725e81\") " pod="openstack/watcher-8d4c-account-create-update-f29tp"
Jan 21 11:18:05 crc kubenswrapper[4881]: I0121 11:18:05.174897 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13ea4f5c-fa1d-485c-80b3-a260d8725e81-operator-scripts\") pod \"watcher-8d4c-account-create-update-f29tp\" (UID: \"13ea4f5c-fa1d-485c-80b3-a260d8725e81\") " pod="openstack/watcher-8d4c-account-create-update-f29tp"
Jan 21 11:18:05 crc kubenswrapper[4881]: I0121 11:18:05.175080 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv8rw\" (UniqueName: \"kubernetes.io/projected/13ea4f5c-fa1d-485c-80b3-a260d8725e81-kube-api-access-gv8rw\") pod \"watcher-8d4c-account-create-update-f29tp\" (UID: \"13ea4f5c-fa1d-485c-80b3-a260d8725e81\") " pod="openstack/watcher-8d4c-account-create-update-f29tp"
Jan 21 11:18:05 crc kubenswrapper[4881]: I0121 11:18:05.176551 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13ea4f5c-fa1d-485c-80b3-a260d8725e81-operator-scripts\") pod \"watcher-8d4c-account-create-update-f29tp\" (UID: \"13ea4f5c-fa1d-485c-80b3-a260d8725e81\") " pod="openstack/watcher-8d4c-account-create-update-f29tp"
Jan 21 11:18:05 crc kubenswrapper[4881]: I0121 11:18:05.205222 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv8rw\" (UniqueName: \"kubernetes.io/projected/13ea4f5c-fa1d-485c-80b3-a260d8725e81-kube-api-access-gv8rw\") pod \"watcher-8d4c-account-create-update-f29tp\" (UID: \"13ea4f5c-fa1d-485c-80b3-a260d8725e81\") " pod="openstack/watcher-8d4c-account-create-update-f29tp"
Jan 21 11:18:05 crc kubenswrapper[4881]: I0121 11:18:05.268454 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-8d4c-account-create-update-f29tp"
Jan 21 11:18:08 crc kubenswrapper[4881]: W0121 11:18:08.272741 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07845bf5_b5f8_4a00_9d0e_b86f5062f1ec.slice/crio-9ade4fe84a29987bc9e08c5c3d4f89144fde4ef8c7952c33c4574696f711b01e WatchSource:0}: Error finding container 9ade4fe84a29987bc9e08c5c3d4f89144fde4ef8c7952c33c4574696f711b01e: Status 404 returned error can't find the container with id 9ade4fe84a29987bc9e08c5c3d4f89144fde4ef8c7952c33c4574696f711b01e
Jan 21 11:18:08 crc kubenswrapper[4881]: I0121 11:18:08.884028 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-gc2qj"]
Jan 21 11:18:08 crc kubenswrapper[4881]: I0121 11:18:08.972397 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a34b-account-create-update-hm56c"]
Jan 21 11:18:08 crc kubenswrapper[4881]: I0121 11:18:08.983507 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b4bf-account-create-update-6p74j"]
Jan 21 11:18:08 crc kubenswrapper[4881]: W0121 11:18:08.986361 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod331fda3a_4e64_4824_abd7_42eaef7b9b4f.slice/crio-276b421549bf6d196987a877eafdaddacc3fb3a5a15f164ab2c4ad7c7b40910d WatchSource:0}: Error finding container 276b421549bf6d196987a877eafdaddacc3fb3a5a15f164ab2c4ad7c7b40910d: Status 404 returned error can't find the container with id 276b421549bf6d196987a877eafdaddacc3fb3a5a15f164ab2c4ad7c7b40910d
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.181507 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-nv8vf"]
Jan 21 11:18:09 crc kubenswrapper[4881]: W0121 11:18:09.182682 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod317bbc59_5154_4c0e_920a_3227d1ec4982.slice/crio-2daa0664d66cd137c24ccb2e8c0b5c88e27c6e03d9118e926f3e7325eeefc498 WatchSource:0}: Error finding container 2daa0664d66cd137c24ccb2e8c0b5c88e27c6e03d9118e926f3e7325eeefc498: Status 404 returned error can't find the container with id 2daa0664d66cd137c24ccb2e8c0b5c88e27c6e03d9118e926f3e7325eeefc498
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.242992 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-smj4g"]
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.280708 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-8d4c-account-create-update-f29tp"]
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.302069 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cp5cl" event={"ID":"07845bf5-b5f8-4a00-9d0e-b86f5062f1ec","Type":"ContainerStarted","Data":"ce6a2cc0cc6379a9f8ed18cfa5d64954b4b7fdd11d37db77a73b2856418b87db"}
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.302119 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cp5cl" event={"ID":"07845bf5-b5f8-4a00-9d0e-b86f5062f1ec","Type":"ContainerStarted","Data":"9ade4fe84a29987bc9e08c5c3d4f89144fde4ef8c7952c33c4574696f711b01e"}
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.339383 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a34b-account-create-update-hm56c" event={"ID":"1c4be317-c914-45c5-8da4-1fe7d647db7e","Type":"ContainerStarted","Data":"afe8f7c033a7212026d827f9755a996c22dd8a81009d9ff086f6c7998b052858"}
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.339440 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j29v8" event={"ID":"27451133-57c8-4991-aae0-ec0a82432176","Type":"ContainerStarted","Data":"5534ffef8705672a9dc2dcfe0651ff073211f019174a771251276741f854255a"}
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.342332 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-cp5cl" podStartSLOduration=9.34231614 podStartE2EDuration="9.34231614s" podCreationTimestamp="2026-01-21 11:18:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:18:09.324989658 +0000 UTC m=+1276.584946137" watchObservedRunningTime="2026-01-21 11:18:09.34231614 +0000 UTC m=+1276.602272599"
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.359734 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nv8vf" event={"ID":"317bbc59-5154-4c0e-920a-3227d1ec4982","Type":"ContainerStarted","Data":"2daa0664d66cd137c24ccb2e8c0b5c88e27c6e03d9118e926f3e7325eeefc498"}
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.371746 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b3882b01-10ce-4832-ae71-676a8b65b086","Type":"ContainerStarted","Data":"e650113f6eb63d8248286db4439fd2bedd5a37053b0d0504f1ef297251b2857e"}
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.371796 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b3882b01-10ce-4832-ae71-676a8b65b086","Type":"ContainerStarted","Data":"8f149eb598e6f19a2fd3b5a35108a80539fb645cee3285c2ced977b3e69057dc"}
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.372816 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.373924 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-gc2qj" event={"ID":"5ecc1262-3ebf-4a17-bc42-507ce55f6d7e","Type":"ContainerStarted","Data":"58c871aeff72223fb977bc5b168401e1ae43b57006b7711f7f615f35566c1421"}
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.390296 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" event={"ID":"62435f30-e8fc-4fcd-8b96-4a604439965e","Type":"ContainerStarted","Data":"a28ebc9fc60a5b5f4c6d8022f7888aae1167af104726fcaf924581e71afbdd73"}
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.390797 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-j29v8" podStartSLOduration=5.350123515 podStartE2EDuration="13.390758421s" podCreationTimestamp="2026-01-21 11:17:56 +0000 UTC" firstStartedPulling="2026-01-21 11:18:00.372689178 +0000 UTC m=+1267.632645647" lastFinishedPulling="2026-01-21 11:18:08.413324084 +0000 UTC m=+1275.673280553" observedRunningTime="2026-01-21 11:18:09.359332065 +0000 UTC m=+1276.619288544" watchObservedRunningTime="2026-01-21 11:18:09.390758421 +0000 UTC m=+1276.650714890"
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.391163 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8"
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.397526 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b4bf-account-create-update-6p74j" event={"ID":"331fda3a-4e64-4824-abd7-42eaef7b9b4f","Type":"ContainerStarted","Data":"276b421549bf6d196987a877eafdaddacc3fb3a5a15f164ab2c4ad7c7b40910d"}
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.419422 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=4.299634443 podStartE2EDuration="12.419396917s" podCreationTimestamp="2026-01-21 11:17:57 +0000 UTC" firstStartedPulling="2026-01-21 11:18:00.235396177 +0000 UTC m=+1267.495352646" lastFinishedPulling="2026-01-21 11:18:08.355158651 +0000 UTC m=+1275.615115120" observedRunningTime="2026-01-21 11:18:09.410402562 +0000 UTC m=+1276.670359051" watchObservedRunningTime="2026-01-21 11:18:09.419396917 +0000 UTC m=+1276.679353386"
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.444456 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-create-gc2qj" podStartSLOduration=5.444425452 podStartE2EDuration="5.444425452s" podCreationTimestamp="2026-01-21 11:18:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:18:09.433175761 +0000 UTC m=+1276.693132250" watchObservedRunningTime="2026-01-21 11:18:09.444425452 +0000 UTC m=+1276.704381921"
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.457989 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" podStartSLOduration=15.457972431 podStartE2EDuration="15.457972431s" podCreationTimestamp="2026-01-21 11:17:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:18:09.451327295 +0000 UTC m=+1276.711283764" watchObservedRunningTime="2026-01-21 11:18:09.457972431 +0000 UTC m=+1276.717928900"
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.409956 4881 generic.go:334] "Generic (PLEG): container finished" podID="5ecc1262-3ebf-4a17-bc42-507ce55f6d7e" containerID="d8dd72ec74cb8c65a23a4d5b59b35333d8b4f0429542fb48634decd408b21787" exitCode=0
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.410245 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-gc2qj" event={"ID":"5ecc1262-3ebf-4a17-bc42-507ce55f6d7e","Type":"ContainerDied","Data":"d8dd72ec74cb8c65a23a4d5b59b35333d8b4f0429542fb48634decd408b21787"}
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.416674 4881 generic.go:334] "Generic (PLEG): container finished" podID="331fda3a-4e64-4824-abd7-42eaef7b9b4f" containerID="5dc89d3192dccc5bebeec553b9ca36f3b56735830fa2f8fae09494c5f8979443" exitCode=0
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.416830 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b4bf-account-create-update-6p74j" event={"ID":"331fda3a-4e64-4824-abd7-42eaef7b9b4f","Type":"ContainerDied","Data":"5dc89d3192dccc5bebeec553b9ca36f3b56735830fa2f8fae09494c5f8979443"}
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.418407 4881 generic.go:334] "Generic (PLEG): container finished" podID="07845bf5-b5f8-4a00-9d0e-b86f5062f1ec" containerID="ce6a2cc0cc6379a9f8ed18cfa5d64954b4b7fdd11d37db77a73b2856418b87db" exitCode=0
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.418496 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cp5cl" event={"ID":"07845bf5-b5f8-4a00-9d0e-b86f5062f1ec","Type":"ContainerDied","Data":"ce6a2cc0cc6379a9f8ed18cfa5d64954b4b7fdd11d37db77a73b2856418b87db"}
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.419621 4881 generic.go:334] "Generic (PLEG): container finished" podID="1c4be317-c914-45c5-8da4-1fe7d647db7e" containerID="08a0b7dafd2179b30f57680020c59d606fe75966918c8bb86686a6dacf5de9ff" exitCode=0
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.419738 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a34b-account-create-update-hm56c" event={"ID":"1c4be317-c914-45c5-8da4-1fe7d647db7e","Type":"ContainerDied","Data":"08a0b7dafd2179b30f57680020c59d606fe75966918c8bb86686a6dacf5de9ff"}
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.422670 4881 generic.go:334] "Generic (PLEG): container finished" podID="317bbc59-5154-4c0e-920a-3227d1ec4982" containerID="8b53d4f0258b883730ea2ab9cbc22ea1275e34223ca52f3ff089755ba0514b17" exitCode=0
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.422723 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nv8vf" event={"ID":"317bbc59-5154-4c0e-920a-3227d1ec4982","Type":"ContainerDied","Data":"8b53d4f0258b883730ea2ab9cbc22ea1275e34223ca52f3ff089755ba0514b17"}
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.428620 4881 generic.go:334] "Generic (PLEG): container finished" podID="b6a422f0-bb4b-442c-a2d7-96ac90ffde83" containerID="8e69c6e6b0d6f76b9304a07ebd26d806a9e9908cc09c50913b96d416ca2b1454" exitCode=0
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.428716 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-smj4g" event={"ID":"b6a422f0-bb4b-442c-a2d7-96ac90ffde83","Type":"ContainerDied","Data":"8e69c6e6b0d6f76b9304a07ebd26d806a9e9908cc09c50913b96d416ca2b1454"}
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.428753 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-smj4g" event={"ID":"b6a422f0-bb4b-442c-a2d7-96ac90ffde83","Type":"ContainerStarted","Data":"44bbcef1140bc7525d4deb943d4b8475b95e76e49e944932a4346bc691fe09f4"}
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.437182 4881 generic.go:334] "Generic (PLEG): container finished" podID="13ea4f5c-fa1d-485c-80b3-a260d8725e81" containerID="9ae9aa24bb02508282163c868da5d6ab7a85e49192dbd35ecea2bbccdab0b150" exitCode=0
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.438136 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-8d4c-account-create-update-f29tp" event={"ID":"13ea4f5c-fa1d-485c-80b3-a260d8725e81","Type":"ContainerDied","Data":"9ae9aa24bb02508282163c868da5d6ab7a85e49192dbd35ecea2bbccdab0b150"}
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.438259 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-8d4c-account-create-update-f29tp" event={"ID":"13ea4f5c-fa1d-485c-80b3-a260d8725e81","Type":"ContainerStarted","Data":"3f180f71b7e84f243dc0e8ce19590c31eb5697d4c0625c36de20a7e3a9598f3a"}
Jan 21 11:18:11 crc kubenswrapper[4881]: I0121 11:18:11.760904 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0"
Jan 21 11:18:11 crc kubenswrapper[4881]: E0121 11:18:11.761095 4881 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 21 11:18:11 crc kubenswrapper[4881]: E0121 11:18:11.761436 4881 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 21 11:18:11 crc kubenswrapper[4881]: E0121 11:18:11.761498 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift podName:eafb725b-4d8c-44b6-8966-4c611d4897d8 nodeName:}" failed. No retries permitted until 2026-01-21 11:18:27.761478906 +0000 UTC m=+1295.021435375 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift") pod "swift-storage-0" (UID: "eafb725b-4d8c-44b6-8966-4c611d4897d8") : configmap "swift-ring-files" not found
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.347263 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-2rtl8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.351994 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-2rtl8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.595640 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-s642n-config-dk4k8"]
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.602940 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.607069 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.610748 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s642n-config-dk4k8"]
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.829879 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-scripts\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.829960 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qsnq\" (UniqueName: \"kubernetes.io/projected/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-kube-api-access-2qsnq\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.829992 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-log-ovn\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.830046 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-run\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.830191 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-run-ovn\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.830480 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-additional-scripts\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.933850 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-scripts\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.933941 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qsnq\" (UniqueName: \"kubernetes.io/projected/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-kube-api-access-2qsnq\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.933985 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-log-ovn\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.934039 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-run\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.934088 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-run-ovn\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.934151 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-additional-scripts\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.934495 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-log-ovn\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.934536 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-run-ovn\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.934510 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-run\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.935225 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-additional-scripts\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.936843 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-scripts\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.959939 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qsnq\" (UniqueName: \"kubernetes.io/projected/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-kube-api-access-2qsnq\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:15 crc kubenswrapper[4881]: I0121 11:18:15.041217 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:15 crc kubenswrapper[4881]: I0121 11:18:15.088117 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8"
Jan 21 11:18:15 crc kubenswrapper[4881]: I0121 11:18:15.166477 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bbbc7b58c-8f8v7"]
Jan 21 11:18:15 crc kubenswrapper[4881]: I0121 11:18:15.167264 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" podUID="efbfd001-4602-47b8-8c93-750ee3526e9e" containerName="dnsmasq-dns" containerID="cri-o://459e19bc99c44fd2c891c741bcf902ef1564b6013c62bfcf04dec268218723e7" gracePeriod=10
Jan 21 11:18:15 crc kubenswrapper[4881]: I0121 11:18:15.482171 4881 generic.go:334] "Generic (PLEG): container finished" podID="44bcf219-3358-4596-9d1e-88a51c415266" containerID="49c33a525e9cb9bae99d4cbbbfd17980a01d8ffda81efc8033434da5404beb26" exitCode=0
Jan 21 11:18:15 crc kubenswrapper[4881]: I0121 11:18:15.482241 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"44bcf219-3358-4596-9d1e-88a51c415266","Type":"ContainerDied","Data":"49c33a525e9cb9bae99d4cbbbfd17980a01d8ffda81efc8033434da5404beb26"}
Jan 21 11:18:15 crc kubenswrapper[4881]: I0121 11:18:15.484411 4881 generic.go:334] "Generic (PLEG): container finished" podID="078c2368-b247-49d4-8723-fd93918e99b1" containerID="26f697deade0e9783aed3c09129f2f0589fbb10b53e3501c212b7fcc5f5b5d86" exitCode=0
Jan 21 11:18:15 crc kubenswrapper[4881]: I0121 11:18:15.484480 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"078c2368-b247-49d4-8723-fd93918e99b1","Type":"ContainerDied","Data":"26f697deade0e9783aed3c09129f2f0589fbb10b53e3501c212b7fcc5f5b5d86"}
Jan 21 11:18:15 crc kubenswrapper[4881]: I0121 11:18:15.486000 4881 generic.go:334] "Generic (PLEG): container finished" podID="f7e90972-9be1-4d3e-852e-e7f7df6e6623" containerID="b30e547e2506fcebf2f8ac627808ad3f0382510a160b2079a570164ee838adfc" exitCode=0
Jan 21 11:18:15 crc kubenswrapper[4881]: I0121 11:18:15.486035 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f7e90972-9be1-4d3e-852e-e7f7df6e6623","Type":"ContainerDied","Data":"b30e547e2506fcebf2f8ac627808ad3f0382510a160b2079a570164ee838adfc"}
Jan 21 11:18:16 crc kubenswrapper[4881]: I0121 11:18:16.987306 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" podUID="efbfd001-4602-47b8-8c93-750ee3526e9e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.120:5353: connect: connection refused"
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.397661 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cp5cl"
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.440117 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b4bf-account-create-update-6p74j"
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.471181 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-smj4g"
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.485182 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a34b-account-create-update-hm56c"
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.502612 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nv8vf"
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.518305 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-8d4c-account-create-update-f29tp"
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.519028 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a34b-account-create-update-hm56c" event={"ID":"1c4be317-c914-45c5-8da4-1fe7d647db7e","Type":"ContainerDied","Data":"afe8f7c033a7212026d827f9755a996c22dd8a81009d9ff086f6c7998b052858"}
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.519059 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afe8f7c033a7212026d827f9755a996c22dd8a81009d9ff086f6c7998b052858"
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.519100 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a34b-account-create-update-hm56c"
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.519756 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-gc2qj"
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.520701 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nv8vf" event={"ID":"317bbc59-5154-4c0e-920a-3227d1ec4982","Type":"ContainerDied","Data":"2daa0664d66cd137c24ccb2e8c0b5c88e27c6e03d9118e926f3e7325eeefc498"}
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.520730 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2daa0664d66cd137c24ccb2e8c0b5c88e27c6e03d9118e926f3e7325eeefc498"
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.520772 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nv8vf"
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.523160 4881 generic.go:334] "Generic (PLEG): container finished" podID="efbfd001-4602-47b8-8c93-750ee3526e9e" containerID="459e19bc99c44fd2c891c741bcf902ef1564b6013c62bfcf04dec268218723e7" exitCode=0
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.523223 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" event={"ID":"efbfd001-4602-47b8-8c93-750ee3526e9e","Type":"ContainerDied","Data":"459e19bc99c44fd2c891c741bcf902ef1564b6013c62bfcf04dec268218723e7"}
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.524647 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-smj4g" event={"ID":"b6a422f0-bb4b-442c-a2d7-96ac90ffde83","Type":"ContainerDied","Data":"44bbcef1140bc7525d4deb943d4b8475b95e76e49e944932a4346bc691fe09f4"}
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.524675 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44bbcef1140bc7525d4deb943d4b8475b95e76e49e944932a4346bc691fe09f4"
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.524729 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-smj4g"
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.527697 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-8d4c-account-create-update-f29tp"
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.527710 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-8d4c-account-create-update-f29tp" event={"ID":"13ea4f5c-fa1d-485c-80b3-a260d8725e81","Type":"ContainerDied","Data":"3f180f71b7e84f243dc0e8ce19590c31eb5697d4c0625c36de20a7e3a9598f3a"}
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.527834 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f180f71b7e84f243dc0e8ce19590c31eb5697d4c0625c36de20a7e3a9598f3a"
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.532617 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07845bf5-b5f8-4a00-9d0e-b86f5062f1ec-operator-scripts\") pod \"07845bf5-b5f8-4a00-9d0e-b86f5062f1ec\" (UID: \"07845bf5-b5f8-4a00-9d0e-b86f5062f1ec\") "
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.533048 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkx6w\" (UniqueName: \"kubernetes.io/projected/07845bf5-b5f8-4a00-9d0e-b86f5062f1ec-kube-api-access-lkx6w\") pod \"07845bf5-b5f8-4a00-9d0e-b86f5062f1ec\" (UID: \"07845bf5-b5f8-4a00-9d0e-b86f5062f1ec\") "
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.534198 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07845bf5-b5f8-4a00-9d0e-b86f5062f1ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "07845bf5-b5f8-4a00-9d0e-b86f5062f1ec" (UID: "07845bf5-b5f8-4a00-9d0e-b86f5062f1ec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.540878 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07845bf5-b5f8-4a00-9d0e-b86f5062f1ec-kube-api-access-lkx6w" (OuterVolumeSpecName: "kube-api-access-lkx6w") pod "07845bf5-b5f8-4a00-9d0e-b86f5062f1ec" (UID: "07845bf5-b5f8-4a00-9d0e-b86f5062f1ec"). InnerVolumeSpecName "kube-api-access-lkx6w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.550915 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-gc2qj" event={"ID":"5ecc1262-3ebf-4a17-bc42-507ce55f6d7e","Type":"ContainerDied","Data":"58c871aeff72223fb977bc5b168401e1ae43b57006b7711f7f615f35566c1421"}
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.550965 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58c871aeff72223fb977bc5b168401e1ae43b57006b7711f7f615f35566c1421"
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.551050 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-gc2qj"
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.553295 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b4bf-account-create-update-6p74j" event={"ID":"331fda3a-4e64-4824-abd7-42eaef7b9b4f","Type":"ContainerDied","Data":"276b421549bf6d196987a877eafdaddacc3fb3a5a15f164ab2c4ad7c7b40910d"}
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.553342 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="276b421549bf6d196987a877eafdaddacc3fb3a5a15f164ab2c4ad7c7b40910d"
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.553408 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b4bf-account-create-update-6p74j"
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.554874 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cp5cl" event={"ID":"07845bf5-b5f8-4a00-9d0e-b86f5062f1ec","Type":"ContainerDied","Data":"9ade4fe84a29987bc9e08c5c3d4f89144fde4ef8c7952c33c4574696f711b01e"}
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.554903 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ade4fe84a29987bc9e08c5c3d4f89144fde4ef8c7952c33c4574696f711b01e"
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.554977 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cp5cl"
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.635207 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13ea4f5c-fa1d-485c-80b3-a260d8725e81-operator-scripts\") pod \"13ea4f5c-fa1d-485c-80b3-a260d8725e81\" (UID: \"13ea4f5c-fa1d-485c-80b3-a260d8725e81\") "
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.635283 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/331fda3a-4e64-4824-abd7-42eaef7b9b4f-operator-scripts\") pod \"331fda3a-4e64-4824-abd7-42eaef7b9b4f\" (UID: \"331fda3a-4e64-4824-abd7-42eaef7b9b4f\") "
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.635315 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw7gw\" (UniqueName: \"kubernetes.io/projected/5ecc1262-3ebf-4a17-bc42-507ce55f6d7e-kube-api-access-zw7gw\") pod \"5ecc1262-3ebf-4a17-bc42-507ce55f6d7e\" (UID: \"5ecc1262-3ebf-4a17-bc42-507ce55f6d7e\") "
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.635377 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64bd2\" (UniqueName: \"kubernetes.io/projected/317bbc59-5154-4c0e-920a-3227d1ec4982-kube-api-access-64bd2\") pod \"317bbc59-5154-4c0e-920a-3227d1ec4982\" (UID: \"317bbc59-5154-4c0e-920a-3227d1ec4982\") "
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.635462 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l844\" (UniqueName: \"kubernetes.io/projected/331fda3a-4e64-4824-abd7-42eaef7b9b4f-kube-api-access-2l844\") pod \"331fda3a-4e64-4824-abd7-42eaef7b9b4f\" (UID: \"331fda3a-4e64-4824-abd7-42eaef7b9b4f\") "
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.635496 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7s25\" (UniqueName: \"kubernetes.io/projected/1c4be317-c914-45c5-8da4-1fe7d647db7e-kube-api-access-h7s25\") pod \"1c4be317-c914-45c5-8da4-1fe7d647db7e\" (UID: \"1c4be317-c914-45c5-8da4-1fe7d647db7e\") "
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.635532 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ecc1262-3ebf-4a17-bc42-507ce55f6d7e-operator-scripts\") pod \"5ecc1262-3ebf-4a17-bc42-507ce55f6d7e\" (UID: \"5ecc1262-3ebf-4a17-bc42-507ce55f6d7e\") "
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.635641 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv8rw\" (UniqueName: \"kubernetes.io/projected/13ea4f5c-fa1d-485c-80b3-a260d8725e81-kube-api-access-gv8rw\") pod \"13ea4f5c-fa1d-485c-80b3-a260d8725e81\" (UID: \"13ea4f5c-fa1d-485c-80b3-a260d8725e81\") "
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.635692 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r258x\" (UniqueName: \"kubernetes.io/projected/b6a422f0-bb4b-442c-a2d7-96ac90ffde83-kube-api-access-r258x\") pod \"b6a422f0-bb4b-442c-a2d7-96ac90ffde83\" (UID: \"b6a422f0-bb4b-442c-a2d7-96ac90ffde83\") "
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.635736 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c4be317-c914-45c5-8da4-1fe7d647db7e-operator-scripts\") pod \"1c4be317-c914-45c5-8da4-1fe7d647db7e\" (UID: \"1c4be317-c914-45c5-8da4-1fe7d647db7e\") "
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.635773 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/317bbc59-5154-4c0e-920a-3227d1ec4982-operator-scripts\") pod \"317bbc59-5154-4c0e-920a-3227d1ec4982\" (UID: \"317bbc59-5154-4c0e-920a-3227d1ec4982\") "
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.635816 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6a422f0-bb4b-442c-a2d7-96ac90ffde83-operator-scripts\") pod \"b6a422f0-bb4b-442c-a2d7-96ac90ffde83\" (UID: \"b6a422f0-bb4b-442c-a2d7-96ac90ffde83\") "
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.636121 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13ea4f5c-fa1d-485c-80b3-a260d8725e81-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "13ea4f5c-fa1d-485c-80b3-a260d8725e81" (UID: "13ea4f5c-fa1d-485c-80b3-a260d8725e81"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.636600 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkx6w\" (UniqueName: \"kubernetes.io/projected/07845bf5-b5f8-4a00-9d0e-b86f5062f1ec-kube-api-access-lkx6w\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.636617 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07845bf5-b5f8-4a00-9d0e-b86f5062f1ec-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.636626 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13ea4f5c-fa1d-485c-80b3-a260d8725e81-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.636655 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c4be317-c914-45c5-8da4-1fe7d647db7e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1c4be317-c914-45c5-8da4-1fe7d647db7e" (UID: "1c4be317-c914-45c5-8da4-1fe7d647db7e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.636841 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/331fda3a-4e64-4824-abd7-42eaef7b9b4f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "331fda3a-4e64-4824-abd7-42eaef7b9b4f" (UID: "331fda3a-4e64-4824-abd7-42eaef7b9b4f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.636962 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/317bbc59-5154-4c0e-920a-3227d1ec4982-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "317bbc59-5154-4c0e-920a-3227d1ec4982" (UID: "317bbc59-5154-4c0e-920a-3227d1ec4982"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.637299 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6a422f0-bb4b-442c-a2d7-96ac90ffde83-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b6a422f0-bb4b-442c-a2d7-96ac90ffde83" (UID: "b6a422f0-bb4b-442c-a2d7-96ac90ffde83"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.637550 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ecc1262-3ebf-4a17-bc42-507ce55f6d7e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ecc1262-3ebf-4a17-bc42-507ce55f6d7e" (UID: "5ecc1262-3ebf-4a17-bc42-507ce55f6d7e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.644078 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6a422f0-bb4b-442c-a2d7-96ac90ffde83-kube-api-access-r258x" (OuterVolumeSpecName: "kube-api-access-r258x") pod "b6a422f0-bb4b-442c-a2d7-96ac90ffde83" (UID: "b6a422f0-bb4b-442c-a2d7-96ac90ffde83"). InnerVolumeSpecName "kube-api-access-r258x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.644197 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ecc1262-3ebf-4a17-bc42-507ce55f6d7e-kube-api-access-zw7gw" (OuterVolumeSpecName: "kube-api-access-zw7gw") pod "5ecc1262-3ebf-4a17-bc42-507ce55f6d7e" (UID: "5ecc1262-3ebf-4a17-bc42-507ce55f6d7e"). InnerVolumeSpecName "kube-api-access-zw7gw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.644252 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13ea4f5c-fa1d-485c-80b3-a260d8725e81-kube-api-access-gv8rw" (OuterVolumeSpecName: "kube-api-access-gv8rw") pod "13ea4f5c-fa1d-485c-80b3-a260d8725e81" (UID: "13ea4f5c-fa1d-485c-80b3-a260d8725e81"). InnerVolumeSpecName "kube-api-access-gv8rw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.644272 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/331fda3a-4e64-4824-abd7-42eaef7b9b4f-kube-api-access-2l844" (OuterVolumeSpecName: "kube-api-access-2l844") pod "331fda3a-4e64-4824-abd7-42eaef7b9b4f" (UID: "331fda3a-4e64-4824-abd7-42eaef7b9b4f"). InnerVolumeSpecName "kube-api-access-2l844". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.648659 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/317bbc59-5154-4c0e-920a-3227d1ec4982-kube-api-access-64bd2" (OuterVolumeSpecName: "kube-api-access-64bd2") pod "317bbc59-5154-4c0e-920a-3227d1ec4982" (UID: "317bbc59-5154-4c0e-920a-3227d1ec4982"). InnerVolumeSpecName "kube-api-access-64bd2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.649435 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c4be317-c914-45c5-8da4-1fe7d647db7e-kube-api-access-h7s25" (OuterVolumeSpecName: "kube-api-access-h7s25") pod "1c4be317-c914-45c5-8da4-1fe7d647db7e" (UID: "1c4be317-c914-45c5-8da4-1fe7d647db7e"). InnerVolumeSpecName "kube-api-access-h7s25". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.787949 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2l844\" (UniqueName: \"kubernetes.io/projected/331fda3a-4e64-4824-abd7-42eaef7b9b4f-kube-api-access-2l844\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.787991 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7s25\" (UniqueName: \"kubernetes.io/projected/1c4be317-c914-45c5-8da4-1fe7d647db7e-kube-api-access-h7s25\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.788007 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ecc1262-3ebf-4a17-bc42-507ce55f6d7e-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.788020 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv8rw\" (UniqueName: \"kubernetes.io/projected/13ea4f5c-fa1d-485c-80b3-a260d8725e81-kube-api-access-gv8rw\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.788033 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r258x\" (UniqueName: \"kubernetes.io/projected/b6a422f0-bb4b-442c-a2d7-96ac90ffde83-kube-api-access-r258x\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.788043 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c4be317-c914-45c5-8da4-1fe7d647db7e-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.788055 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/317bbc59-5154-4c0e-920a-3227d1ec4982-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.788066 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6a422f0-bb4b-442c-a2d7-96ac90ffde83-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.788078 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/331fda3a-4e64-4824-abd7-42eaef7b9b4f-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.788091 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw7gw\" (UniqueName: \"kubernetes.io/projected/5ecc1262-3ebf-4a17-bc42-507ce55f6d7e-kube-api-access-zw7gw\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.788101 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64bd2\" (UniqueName: \"kubernetes.io/projected/317bbc59-5154-4c0e-920a-3227d1ec4982-kube-api-access-64bd2\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.880407 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7"
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.965496 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s642n-config-dk4k8"]
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.996124 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-ovsdbserver-nb\") pod \"efbfd001-4602-47b8-8c93-750ee3526e9e\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") "
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.996191 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-config\") pod \"efbfd001-4602-47b8-8c93-750ee3526e9e\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") "
Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:17.996277 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-dns-svc\") pod \"efbfd001-4602-47b8-8c93-750ee3526e9e\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") "
Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:17.996320 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krc4s\" (UniqueName: \"kubernetes.io/projected/efbfd001-4602-47b8-8c93-750ee3526e9e-kube-api-access-krc4s\") pod \"efbfd001-4602-47b8-8c93-750ee3526e9e\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") "
Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:17.996361 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-ovsdbserver-sb\") pod \"efbfd001-4602-47b8-8c93-750ee3526e9e\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") "
Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.001897 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efbfd001-4602-47b8-8c93-750ee3526e9e-kube-api-access-krc4s" (OuterVolumeSpecName: "kube-api-access-krc4s") pod "efbfd001-4602-47b8-8c93-750ee3526e9e" (UID: "efbfd001-4602-47b8-8c93-750ee3526e9e"). InnerVolumeSpecName "kube-api-access-krc4s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.043754 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "efbfd001-4602-47b8-8c93-750ee3526e9e" (UID: "efbfd001-4602-47b8-8c93-750ee3526e9e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.049304 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "efbfd001-4602-47b8-8c93-750ee3526e9e" (UID: "efbfd001-4602-47b8-8c93-750ee3526e9e"). InnerVolumeSpecName "ovsdbserver-sb".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.052640 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "efbfd001-4602-47b8-8c93-750ee3526e9e" (UID: "efbfd001-4602-47b8-8c93-750ee3526e9e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.057327 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-config" (OuterVolumeSpecName: "config") pod "efbfd001-4602-47b8-8c93-750ee3526e9e" (UID: "efbfd001-4602-47b8-8c93-750ee3526e9e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.099053 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.099090 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.099102 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.099111 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krc4s\" (UniqueName: \"kubernetes.io/projected/efbfd001-4602-47b8-8c93-750ee3526e9e-kube-api-access-krc4s\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.099124 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.566402 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f7e90972-9be1-4d3e-852e-e7f7df6e6623","Type":"ContainerStarted","Data":"8a0e4e5a99ef920688a0d7a6463ea9c0a7db6ff987fcbf667df0b4f98b3356bf"} Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.567004 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.568447 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s642n-config-dk4k8" event={"ID":"bb419db7-7bc4-473f-a1ea-7878c6cc7cee","Type":"ContainerStarted","Data":"4b32abc6871e628e297cbe463288501e5adf49f03da08854de77bfb91714eedb"} Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.568516 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s642n-config-dk4k8" event={"ID":"bb419db7-7bc4-473f-a1ea-7878c6cc7cee","Type":"ContainerStarted","Data":"17b36a4727f2f30052334d778af9941aaf1d732632d15b0eb264bc2a85ccdbb5"} Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.570733 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" 
event={"ID":"44bcf219-3358-4596-9d1e-88a51c415266","Type":"ContainerStarted","Data":"c5853aef3fb2571c98cb61a06c87c41306574ddbfbed106da2329564ad9cdd0c"} Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.571007 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.573171 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"078c2368-b247-49d4-8723-fd93918e99b1","Type":"ContainerStarted","Data":"023f57aba22657f38c9822a9fcfbabd9eb5513e10f1d131208e251a7df31b2a0"} Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.573610 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.575956 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"75733567-f2a6-4331-bdea-147126213437","Type":"ContainerStarted","Data":"2d247ee2c4ae6dcda1bc7bdb88b6f46d738cb9050ce2b5c108235bf069c56986"} Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.579327 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" event={"ID":"efbfd001-4602-47b8-8c93-750ee3526e9e","Type":"ContainerDied","Data":"0d2501cc7f927d66e1b692f30c322a8fe23a8259355cb2568f67f16617966fc3"} Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.579375 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.579382 4881 scope.go:117] "RemoveContainer" containerID="459e19bc99c44fd2c891c741bcf902ef1564b6013c62bfcf04dec268218723e7" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.606254 4881 scope.go:117] "RemoveContainer" containerID="cdc12a4dbe29fc14fdd129b9c5c90a6d695123d10dd8715736366c33c786a70d" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.629535 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371936.225256 podStartE2EDuration="1m40.62952031s" podCreationTimestamp="2026-01-21 11:16:38 +0000 UTC" firstStartedPulling="2026-01-21 11:16:43.050484476 +0000 UTC m=+1190.310440945" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:18:18.62351451 +0000 UTC m=+1285.883470979" watchObservedRunningTime="2026-01-21 11:18:18.62952031 +0000 UTC m=+1285.889476779" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.706465 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=45.461615508 podStartE2EDuration="1m40.706442242s" podCreationTimestamp="2026-01-21 11:16:38 +0000 UTC" firstStartedPulling="2026-01-21 11:16:41.35837995 +0000 UTC m=+1188.618336419" lastFinishedPulling="2026-01-21 11:17:36.603206684 +0000 UTC m=+1243.863163153" observedRunningTime="2026-01-21 11:18:18.701343494 +0000 UTC m=+1285.961299963" watchObservedRunningTime="2026-01-21 11:18:18.706442242 +0000 UTC m=+1285.966398731" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.758403 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=5.063499534 podStartE2EDuration="1m34.75837752s" podCreationTimestamp="2026-01-21 11:16:44 +0000 UTC" firstStartedPulling="2026-01-21 
11:16:48.167169064 +0000 UTC m=+1195.427125533" lastFinishedPulling="2026-01-21 11:18:17.86204705 +0000 UTC m=+1285.122003519" observedRunningTime="2026-01-21 11:18:18.750521483 +0000 UTC m=+1286.010477952" watchObservedRunningTime="2026-01-21 11:18:18.75837752 +0000 UTC m=+1286.018333999" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.818376 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-notifications-server-0" podStartSLOduration=-9223371937.036423 podStartE2EDuration="1m39.818353829s" podCreationTimestamp="2026-01-21 11:16:39 +0000 UTC" firstStartedPulling="2026-01-21 11:16:43.010412195 +0000 UTC m=+1190.270368664" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:18:18.790134044 +0000 UTC m=+1286.050090513" watchObservedRunningTime="2026-01-21 11:18:18.818353829 +0000 UTC m=+1286.078310298" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.839606 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-s642n-config-dk4k8" podStartSLOduration=4.839588069 podStartE2EDuration="4.839588069s" podCreationTimestamp="2026-01-21 11:18:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:18:18.823168398 +0000 UTC m=+1286.083124867" watchObservedRunningTime="2026-01-21 11:18:18.839588069 +0000 UTC m=+1286.099544538" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.844812 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bbbc7b58c-8f8v7"] Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.854294 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bbbc7b58c-8f8v7"] Jan 21 11:18:19 crc kubenswrapper[4881]: I0121 11:18:19.326315 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efbfd001-4602-47b8-8c93-750ee3526e9e" path="/var/lib/kubelet/pods/efbfd001-4602-47b8-8c93-750ee3526e9e/volumes" Jan 21 11:18:19 crc kubenswrapper[4881]: I0121 11:18:19.591279 4881 generic.go:334] "Generic (PLEG): container finished" podID="bb419db7-7bc4-473f-a1ea-7878c6cc7cee" containerID="4b32abc6871e628e297cbe463288501e5adf49f03da08854de77bfb91714eedb" exitCode=0 Jan 21 11:18:19 crc kubenswrapper[4881]: I0121 11:18:19.591319 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s642n-config-dk4k8" event={"ID":"bb419db7-7bc4-473f-a1ea-7878c6cc7cee","Type":"ContainerDied","Data":"4b32abc6871e628e297cbe463288501e5adf49f03da08854de77bfb91714eedb"} Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.072896 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-s642n-config-dk4k8" Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.184987 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qsnq\" (UniqueName: \"kubernetes.io/projected/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-kube-api-access-2qsnq\") pod \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.185056 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-scripts\") pod \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.185142 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-additional-scripts\") pod \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.185223 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-log-ovn\") pod \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.185289 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-run\") pod \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.185344 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-run-ovn\") pod \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.186053 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "bb419db7-7bc4-473f-a1ea-7878c6cc7cee" (UID: "bb419db7-7bc4-473f-a1ea-7878c6cc7cee"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.187253 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "bb419db7-7bc4-473f-a1ea-7878c6cc7cee" (UID: "bb419db7-7bc4-473f-a1ea-7878c6cc7cee"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.187349 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-run" (OuterVolumeSpecName: "var-run") pod "bb419db7-7bc4-473f-a1ea-7878c6cc7cee" (UID: "bb419db7-7bc4-473f-a1ea-7878c6cc7cee"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.188224 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "bb419db7-7bc4-473f-a1ea-7878c6cc7cee" (UID: "bb419db7-7bc4-473f-a1ea-7878c6cc7cee"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.188523 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-scripts" (OuterVolumeSpecName: "scripts") pod "bb419db7-7bc4-473f-a1ea-7878c6cc7cee" (UID: "bb419db7-7bc4-473f-a1ea-7878c6cc7cee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.194081 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-kube-api-access-2qsnq" (OuterVolumeSpecName: "kube-api-access-2qsnq") pod "bb419db7-7bc4-473f-a1ea-7878c6cc7cee" (UID: "bb419db7-7bc4-473f-a1ea-7878c6cc7cee"). InnerVolumeSpecName "kube-api-access-2qsnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.287825 4881 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.287875 4881 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.287889 4881 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-run\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.287901 4881 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.287915 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qsnq\" (UniqueName: \"kubernetes.io/projected/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-kube-api-access-2qsnq\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.287930 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.325378 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-cp5cl"] Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.330538 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-cp5cl"] Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.612916 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s642n-config-dk4k8" 
event={"ID":"bb419db7-7bc4-473f-a1ea-7878c6cc7cee","Type":"ContainerDied","Data":"17b36a4727f2f30052334d778af9941aaf1d732632d15b0eb264bc2a85ccdbb5"} Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.612978 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17b36a4727f2f30052334d778af9941aaf1d732632d15b0eb264bc2a85ccdbb5" Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.613064 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-s642n-config-dk4k8" Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.624712 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:22 crc kubenswrapper[4881]: I0121 11:18:22.368007 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-s642n-config-dk4k8"] Jan 21 11:18:22 crc kubenswrapper[4881]: I0121 11:18:22.386843 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-s642n-config-dk4k8"] Jan 21 11:18:23 crc kubenswrapper[4881]: I0121 11:18:23.019286 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 21 11:18:23 crc kubenswrapper[4881]: I0121 11:18:23.323035 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07845bf5-b5f8-4a00-9d0e-b86f5062f1ec" path="/var/lib/kubelet/pods/07845bf5-b5f8-4a00-9d0e-b86f5062f1ec/volumes" Jan 21 11:18:23 crc kubenswrapper[4881]: I0121 11:18:23.323752 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb419db7-7bc4-473f-a1ea-7878c6cc7cee" path="/var/lib/kubelet/pods/bb419db7-7bc4-473f-a1ea-7878c6cc7cee/volumes" Jan 21 11:18:24 crc kubenswrapper[4881]: I0121 11:18:24.362356 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-s642n" Jan 21 11:18:25 crc kubenswrapper[4881]: I0121 11:18:25.647375 4881 generic.go:334] "Generic (PLEG): container finished" podID="27451133-57c8-4991-aae0-ec0a82432176" containerID="5534ffef8705672a9dc2dcfe0651ff073211f019174a771251276741f854255a" exitCode=0 Jan 21 11:18:25 crc kubenswrapper[4881]: I0121 11:18:25.647413 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j29v8" event={"ID":"27451133-57c8-4991-aae0-ec0a82432176","Type":"ContainerDied","Data":"5534ffef8705672a9dc2dcfe0651ff073211f019174a771251276741f854255a"} Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.325603 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-n9992"] Jan 21 11:18:26 crc kubenswrapper[4881]: E0121 11:18:26.326098 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="317bbc59-5154-4c0e-920a-3227d1ec4982" containerName="mariadb-database-create" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326125 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="317bbc59-5154-4c0e-920a-3227d1ec4982" containerName="mariadb-database-create" Jan 21 11:18:26 crc kubenswrapper[4881]: E0121 11:18:26.326141 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ecc1262-3ebf-4a17-bc42-507ce55f6d7e" containerName="mariadb-database-create" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326150 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ecc1262-3ebf-4a17-bc42-507ce55f6d7e" containerName="mariadb-database-create" Jan 21 11:18:26 crc kubenswrapper[4881]: E0121 11:18:26.326169 4881 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb419db7-7bc4-473f-a1ea-7878c6cc7cee" containerName="ovn-config" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326177 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb419db7-7bc4-473f-a1ea-7878c6cc7cee" containerName="ovn-config" Jan 21 11:18:26 crc kubenswrapper[4881]: E0121 11:18:26.326200 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efbfd001-4602-47b8-8c93-750ee3526e9e" containerName="init" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326208 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="efbfd001-4602-47b8-8c93-750ee3526e9e" containerName="init" Jan 21 11:18:26 crc kubenswrapper[4881]: E0121 11:18:26.326224 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13ea4f5c-fa1d-485c-80b3-a260d8725e81" containerName="mariadb-account-create-update" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326231 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="13ea4f5c-fa1d-485c-80b3-a260d8725e81" containerName="mariadb-account-create-update" Jan 21 11:18:26 crc kubenswrapper[4881]: E0121 11:18:26.326241 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07845bf5-b5f8-4a00-9d0e-b86f5062f1ec" containerName="mariadb-account-create-update" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326248 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="07845bf5-b5f8-4a00-9d0e-b86f5062f1ec" containerName="mariadb-account-create-update" Jan 21 11:18:26 crc kubenswrapper[4881]: E0121 11:18:26.326265 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="331fda3a-4e64-4824-abd7-42eaef7b9b4f" containerName="mariadb-account-create-update" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326273 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="331fda3a-4e64-4824-abd7-42eaef7b9b4f" containerName="mariadb-account-create-update" Jan 21 11:18:26 crc kubenswrapper[4881]: E0121 11:18:26.326285 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6a422f0-bb4b-442c-a2d7-96ac90ffde83" containerName="mariadb-database-create" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326292 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6a422f0-bb4b-442c-a2d7-96ac90ffde83" containerName="mariadb-database-create" Jan 21 11:18:26 crc kubenswrapper[4881]: E0121 11:18:26.326301 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efbfd001-4602-47b8-8c93-750ee3526e9e" containerName="dnsmasq-dns" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326309 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="efbfd001-4602-47b8-8c93-750ee3526e9e" containerName="dnsmasq-dns" Jan 21 11:18:26 crc kubenswrapper[4881]: E0121 11:18:26.326321 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c4be317-c914-45c5-8da4-1fe7d647db7e" containerName="mariadb-account-create-update" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326328 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c4be317-c914-45c5-8da4-1fe7d647db7e" containerName="mariadb-account-create-update" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326546 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="331fda3a-4e64-4824-abd7-42eaef7b9b4f" containerName="mariadb-account-create-update" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326561 4881 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="317bbc59-5154-4c0e-920a-3227d1ec4982" containerName="mariadb-database-create" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326577 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="07845bf5-b5f8-4a00-9d0e-b86f5062f1ec" containerName="mariadb-account-create-update" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326593 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c4be317-c914-45c5-8da4-1fe7d647db7e" containerName="mariadb-account-create-update" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326609 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb419db7-7bc4-473f-a1ea-7878c6cc7cee" containerName="ovn-config" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326621 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ecc1262-3ebf-4a17-bc42-507ce55f6d7e" containerName="mariadb-database-create" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326632 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="13ea4f5c-fa1d-485c-80b3-a260d8725e81" containerName="mariadb-account-create-update" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326644 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6a422f0-bb4b-442c-a2d7-96ac90ffde83" containerName="mariadb-database-create" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326653 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="efbfd001-4602-47b8-8c93-750ee3526e9e" containerName="dnsmasq-dns" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.327370 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-n9992" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.330802 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.408141 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-n9992"] Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.446713 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70a2b37a-049a-45a1-aeb5-6b7d5515dd69-operator-scripts\") pod \"root-account-create-update-n9992\" (UID: \"70a2b37a-049a-45a1-aeb5-6b7d5515dd69\") " pod="openstack/root-account-create-update-n9992" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.446878 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6l98\" (UniqueName: \"kubernetes.io/projected/70a2b37a-049a-45a1-aeb5-6b7d5515dd69-kube-api-access-k6l98\") pod \"root-account-create-update-n9992\" (UID: \"70a2b37a-049a-45a1-aeb5-6b7d5515dd69\") " pod="openstack/root-account-create-update-n9992" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.548570 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70a2b37a-049a-45a1-aeb5-6b7d5515dd69-operator-scripts\") pod \"root-account-create-update-n9992\" (UID: \"70a2b37a-049a-45a1-aeb5-6b7d5515dd69\") " pod="openstack/root-account-create-update-n9992" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.548696 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6l98\" (UniqueName: 
\"kubernetes.io/projected/70a2b37a-049a-45a1-aeb5-6b7d5515dd69-kube-api-access-k6l98\") pod \"root-account-create-update-n9992\" (UID: \"70a2b37a-049a-45a1-aeb5-6b7d5515dd69\") " pod="openstack/root-account-create-update-n9992" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.549616 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70a2b37a-049a-45a1-aeb5-6b7d5515dd69-operator-scripts\") pod \"root-account-create-update-n9992\" (UID: \"70a2b37a-049a-45a1-aeb5-6b7d5515dd69\") " pod="openstack/root-account-create-update-n9992" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.573054 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6l98\" (UniqueName: \"kubernetes.io/projected/70a2b37a-049a-45a1-aeb5-6b7d5515dd69-kube-api-access-k6l98\") pod \"root-account-create-update-n9992\" (UID: \"70a2b37a-049a-45a1-aeb5-6b7d5515dd69\") " pod="openstack/root-account-create-update-n9992" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.726527 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-n9992" Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.155029 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.203442 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27451133-57c8-4991-aae0-ec0a82432176-scripts\") pod \"27451133-57c8-4991-aae0-ec0a82432176\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.203564 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-swiftconf\") pod \"27451133-57c8-4991-aae0-ec0a82432176\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.203610 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/27451133-57c8-4991-aae0-ec0a82432176-etc-swift\") pod \"27451133-57c8-4991-aae0-ec0a82432176\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.203672 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-dispersionconf\") pod \"27451133-57c8-4991-aae0-ec0a82432176\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.203699 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp4l2\" (UniqueName: \"kubernetes.io/projected/27451133-57c8-4991-aae0-ec0a82432176-kube-api-access-fp4l2\") pod \"27451133-57c8-4991-aae0-ec0a82432176\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.203774 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-combined-ca-bundle\") pod \"27451133-57c8-4991-aae0-ec0a82432176\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 
11:18:27.203829 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/27451133-57c8-4991-aae0-ec0a82432176-ring-data-devices\") pod \"27451133-57c8-4991-aae0-ec0a82432176\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.204677 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27451133-57c8-4991-aae0-ec0a82432176-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "27451133-57c8-4991-aae0-ec0a82432176" (UID: "27451133-57c8-4991-aae0-ec0a82432176"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.204827 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27451133-57c8-4991-aae0-ec0a82432176-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "27451133-57c8-4991-aae0-ec0a82432176" (UID: "27451133-57c8-4991-aae0-ec0a82432176"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.209578 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27451133-57c8-4991-aae0-ec0a82432176-kube-api-access-fp4l2" (OuterVolumeSpecName: "kube-api-access-fp4l2") pod "27451133-57c8-4991-aae0-ec0a82432176" (UID: "27451133-57c8-4991-aae0-ec0a82432176"). InnerVolumeSpecName "kube-api-access-fp4l2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.220816 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "27451133-57c8-4991-aae0-ec0a82432176" (UID: "27451133-57c8-4991-aae0-ec0a82432176"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.233540 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27451133-57c8-4991-aae0-ec0a82432176" (UID: "27451133-57c8-4991-aae0-ec0a82432176"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.234114 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "27451133-57c8-4991-aae0-ec0a82432176" (UID: "27451133-57c8-4991-aae0-ec0a82432176"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.235186 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27451133-57c8-4991-aae0-ec0a82432176-scripts" (OuterVolumeSpecName: "scripts") pod "27451133-57c8-4991-aae0-ec0a82432176" (UID: "27451133-57c8-4991-aae0-ec0a82432176"). InnerVolumeSpecName "scripts". 
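[Editor's aside] The `cpu_manager.go:410`, `state_mem.go:107`, and `memory_manager.go:354` records earlier in this window show the kubelet pruning per-container resource-manager state ("RemoveStaleState") for pods deleted above (ovn-config, dnsmasq-dns, the mariadb jobs) before admitting root-account-create-update-n9992. Below is a generic sketch of that reconcile-a-state-map pattern, keyed the way the log lines are (podUID plus containerName); it assumes nothing about kubelet internals beyond what the records show.

```go
package main

import "fmt"

// containerKey identifies state the same way the log lines do.
type containerKey struct{ podUID, container string }

// removeStaleState drops assignments for pods that are no longer active,
// mirroring the shape of the "RemoveStaleState: removing container" entries
// above. Generic sketch, not kubelet code.
func removeStaleState(state map[containerKey]string, activePods map[string]bool) {
	for k := range state { // deleting during range is safe in Go
		if !activePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				k.podUID, k.container)
			delete(state, k)
		}
	}
}

func main() {
	// podUID/container pairs copied from the records above; the assignment
	// values are invented placeholders.
	state := map[containerKey]string{
		{"bb419db7-7bc4-473f-a1ea-7878c6cc7cee", "ovn-config"}:              "cpuset 0-3",
		{"efbfd001-4602-47b8-8c93-750ee3526e9e", "dnsmasq-dns"}:             "cpuset 0-3",
		{"317bbc59-5154-4c0e-920a-3227d1ec4982", "mariadb-database-create"}: "cpuset 0-3",
	}
	removeStaleState(state, map[string]bool{ /* none of the above remain active */ })
}
```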
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.305450 4881 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.305707 4881 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/27451133-57c8-4991-aae0-ec0a82432176-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.305716 4881 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.305727 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fp4l2\" (UniqueName: \"kubernetes.io/projected/27451133-57c8-4991-aae0-ec0a82432176-kube-api-access-fp4l2\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.305737 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.305747 4881 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/27451133-57c8-4991-aae0-ec0a82432176-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.305767 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27451133-57c8-4991-aae0-ec0a82432176-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.322223 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-n9992"] Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.724902 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.725411 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j29v8" event={"ID":"27451133-57c8-4991-aae0-ec0a82432176","Type":"ContainerDied","Data":"a7d4d23aa2fd8ae274e39ac46c3595d9d1bd6e0b97327033852c004b5061046a"} Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.725582 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7d4d23aa2fd8ae274e39ac46c3595d9d1bd6e0b97327033852c004b5061046a" Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.727294 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-n9992" event={"ID":"70a2b37a-049a-45a1-aeb5-6b7d5515dd69","Type":"ContainerStarted","Data":"6e182625c740cd9b27db99777efb40afa19b03bf59089a6dcf471f48c90169e9"} Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.815206 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0" Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.825532 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0" Jan 21 11:18:28 crc kubenswrapper[4881]: I0121 11:18:28.016640 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 21 11:18:29 crc kubenswrapper[4881]: W0121 11:18:29.380568 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeafb725b_4d8c_44b6_8966_4c611d4897d8.slice/crio-fa371e25057562bc0967926609e1375457c656d723443fd8c191eb196655406f WatchSource:0}: Error finding container fa371e25057562bc0967926609e1375457c656d723443fd8c191eb196655406f: Status 404 returned error can't find the container with id fa371e25057562bc0967926609e1375457c656d723443fd8c191eb196655406f Jan 21 11:18:29 crc kubenswrapper[4881]: I0121 11:18:29.397536 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 21 11:18:30 crc kubenswrapper[4881]: I0121 11:18:30.157303 4881 generic.go:334] "Generic (PLEG): container finished" podID="70a2b37a-049a-45a1-aeb5-6b7d5515dd69" containerID="0287622c020081ba9c95095872909db810663fe9347d92c3e84d5f5ddca8090f" exitCode=0 Jan 21 11:18:30 crc kubenswrapper[4881]: I0121 11:18:30.157403 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-n9992" event={"ID":"70a2b37a-049a-45a1-aeb5-6b7d5515dd69","Type":"ContainerDied","Data":"0287622c020081ba9c95095872909db810663fe9347d92c3e84d5f5ddca8090f"} Jan 21 11:18:30 crc kubenswrapper[4881]: I0121 11:18:30.159323 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"fa371e25057562bc0967926609e1375457c656d723443fd8c191eb196655406f"} Jan 21 11:18:30 crc kubenswrapper[4881]: I0121 11:18:30.192330 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" 
podUID="078c2368-b247-49d4-8723-fd93918e99b1" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Jan 21 11:18:30 crc kubenswrapper[4881]: I0121 11:18:30.480508 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f7e90972-9be1-4d3e-852e-e7f7df6e6623" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Jan 21 11:18:30 crc kubenswrapper[4881]: I0121 11:18:30.588757 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-notifications-server-0" podUID="44bcf219-3358-4596-9d1e-88a51c415266" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.108:5671: connect: connection refused" Jan 21 11:18:31 crc kubenswrapper[4881]: I0121 11:18:31.258386 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"4fd185e130e69b2415f699558b7acc78898a4578573fcf0ee5fd93c9eb52f9a9"} Jan 21 11:18:31 crc kubenswrapper[4881]: I0121 11:18:31.258443 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"16fbd2cda89c78dca24b01be6b4a2ae3db901547bd215d6b3e425bcb0a7650ed"} Jan 21 11:18:31 crc kubenswrapper[4881]: I0121 11:18:31.258456 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"8a7eba6c367beaedd5f9a7ebe117fff18116464f09dfc1c8fe21415f39dc26bf"} Jan 21 11:18:31 crc kubenswrapper[4881]: I0121 11:18:31.625218 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:31 crc kubenswrapper[4881]: I0121 11:18:31.628886 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:31 crc kubenswrapper[4881]: I0121 11:18:31.701595 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-n9992" Jan 21 11:18:31 crc kubenswrapper[4881]: I0121 11:18:31.857385 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6l98\" (UniqueName: \"kubernetes.io/projected/70a2b37a-049a-45a1-aeb5-6b7d5515dd69-kube-api-access-k6l98\") pod \"70a2b37a-049a-45a1-aeb5-6b7d5515dd69\" (UID: \"70a2b37a-049a-45a1-aeb5-6b7d5515dd69\") " Jan 21 11:18:31 crc kubenswrapper[4881]: I0121 11:18:31.857547 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70a2b37a-049a-45a1-aeb5-6b7d5515dd69-operator-scripts\") pod \"70a2b37a-049a-45a1-aeb5-6b7d5515dd69\" (UID: \"70a2b37a-049a-45a1-aeb5-6b7d5515dd69\") " Jan 21 11:18:31 crc kubenswrapper[4881]: I0121 11:18:31.858168 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70a2b37a-049a-45a1-aeb5-6b7d5515dd69-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "70a2b37a-049a-45a1-aeb5-6b7d5515dd69" (UID: "70a2b37a-049a-45a1-aeb5-6b7d5515dd69"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:18:31 crc kubenswrapper[4881]: I0121 11:18:31.869946 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70a2b37a-049a-45a1-aeb5-6b7d5515dd69-kube-api-access-k6l98" (OuterVolumeSpecName: "kube-api-access-k6l98") pod "70a2b37a-049a-45a1-aeb5-6b7d5515dd69" (UID: "70a2b37a-049a-45a1-aeb5-6b7d5515dd69"). InnerVolumeSpecName "kube-api-access-k6l98". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:18:31 crc kubenswrapper[4881]: I0121 11:18:31.960154 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6l98\" (UniqueName: \"kubernetes.io/projected/70a2b37a-049a-45a1-aeb5-6b7d5515dd69-kube-api-access-k6l98\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:31 crc kubenswrapper[4881]: I0121 11:18:31.960200 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70a2b37a-049a-45a1-aeb5-6b7d5515dd69-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:32 crc kubenswrapper[4881]: I0121 11:18:32.269298 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-n9992" event={"ID":"70a2b37a-049a-45a1-aeb5-6b7d5515dd69","Type":"ContainerDied","Data":"6e182625c740cd9b27db99777efb40afa19b03bf59089a6dcf471f48c90169e9"} Jan 21 11:18:32 crc kubenswrapper[4881]: I0121 11:18:32.269641 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e182625c740cd9b27db99777efb40afa19b03bf59089a6dcf471f48c90169e9" Jan 21 11:18:32 crc kubenswrapper[4881]: I0121 11:18:32.269355 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-n9992" Jan 21 11:18:32 crc kubenswrapper[4881]: I0121 11:18:32.272021 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"30ef45750c7a8247839a15ff79716a9275f85fec09fa57057b4125239f19114b"} Jan 21 11:18:32 crc kubenswrapper[4881]: I0121 11:18:32.278908 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:33 crc kubenswrapper[4881]: I0121 11:18:33.301893 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"d3c4e2bdeaf341c15b75402994fe952ffbca5d0b9516cc44904770c1c4df18e7"} Jan 21 11:18:33 crc kubenswrapper[4881]: I0121 11:18:33.302264 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"a2b0caec7793e742605110a061597cc5066635faf6282964b3a3687b1511e3bd"} Jan 21 11:18:33 crc kubenswrapper[4881]: I0121 11:18:33.302279 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"9a8f8fb1f0e137ee1f5de2fb461ffc3df0553ae1fb3bbcc4b17b9b6c66fa13e8"} Jan 21 11:18:33 crc kubenswrapper[4881]: I0121 11:18:33.302290 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"3c3eda95e085d4311b0544201ae61db17d63fb863fdd6190b85822634f42ecd9"} Jan 21 11:18:35 crc kubenswrapper[4881]: I0121 11:18:35.469614 
4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"2f05a62c3bd278bf78c1161a5e27081796317c3b2794a6ecf5faa3095cf831c5"} Jan 21 11:18:35 crc kubenswrapper[4881]: I0121 11:18:35.533322 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 11:18:35 crc kubenswrapper[4881]: I0121 11:18:35.533923 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="75733567-f2a6-4331-bdea-147126213437" containerName="thanos-sidecar" containerID="cri-o://2d247ee2c4ae6dcda1bc7bdb88b6f46d738cb9050ce2b5c108235bf069c56986" gracePeriod=600 Jan 21 11:18:35 crc kubenswrapper[4881]: I0121 11:18:35.534112 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="75733567-f2a6-4331-bdea-147126213437" containerName="config-reloader" containerID="cri-o://5833adb0117a8d41a669b51e672fa4471dd8e152778ebc0db32735d286328549" gracePeriod=600 Jan 21 11:18:35 crc kubenswrapper[4881]: I0121 11:18:35.534184 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="75733567-f2a6-4331-bdea-147126213437" containerName="prometheus" containerID="cri-o://a56efe39870006b796c3201c8dc3334fb4d25c094ef7e6facbf2f393bd54653c" gracePeriod=600 Jan 21 11:18:35 crc kubenswrapper[4881]: E0121 11:18:35.725567 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75733567_f2a6_4331_bdea_147126213437.slice/crio-conmon-2d247ee2c4ae6dcda1bc7bdb88b6f46d738cb9050ce2b5c108235bf069c56986.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75733567_f2a6_4331_bdea_147126213437.slice/crio-2d247ee2c4ae6dcda1bc7bdb88b6f46d738cb9050ce2b5c108235bf069c56986.scope\": RecentStats: unable to find data in memory cache]" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.498124 4881 generic.go:334] "Generic (PLEG): container finished" podID="75733567-f2a6-4331-bdea-147126213437" containerID="2d247ee2c4ae6dcda1bc7bdb88b6f46d738cb9050ce2b5c108235bf069c56986" exitCode=0 Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.498516 4881 generic.go:334] "Generic (PLEG): container finished" podID="75733567-f2a6-4331-bdea-147126213437" containerID="5833adb0117a8d41a669b51e672fa4471dd8e152778ebc0db32735d286328549" exitCode=0 Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.498524 4881 generic.go:334] "Generic (PLEG): container finished" podID="75733567-f2a6-4331-bdea-147126213437" containerID="a56efe39870006b796c3201c8dc3334fb4d25c094ef7e6facbf2f393bd54653c" exitCode=0 Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.498584 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"75733567-f2a6-4331-bdea-147126213437","Type":"ContainerDied","Data":"2d247ee2c4ae6dcda1bc7bdb88b6f46d738cb9050ce2b5c108235bf069c56986"} Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.498612 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"75733567-f2a6-4331-bdea-147126213437","Type":"ContainerDied","Data":"5833adb0117a8d41a669b51e672fa4471dd8e152778ebc0db32735d286328549"} Jan 21 11:18:36 crc 
kubenswrapper[4881]: I0121 11:18:36.498623 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"75733567-f2a6-4331-bdea-147126213437","Type":"ContainerDied","Data":"a56efe39870006b796c3201c8dc3334fb4d25c094ef7e6facbf2f393bd54653c"} Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.514008 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"b2b3898e8cf9e67719df1cbd7d9730c00502e2beb2d6aabf2368adaabab0bde5"} Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.514057 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"a455f293414cf9854db8ac764207fddf18e9d7fdd01199943100a6d3d797481d"} Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.514066 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"30ebbc3da58097752ab30be268597fcf58310323ce02fb38797ea939848af428"} Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.725089 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.733489 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2vkg\" (UniqueName: \"kubernetes.io/projected/75733567-f2a6-4331-bdea-147126213437-kube-api-access-n2vkg\") pod \"75733567-f2a6-4331-bdea-147126213437\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.736692 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-web-config\") pod \"75733567-f2a6-4331-bdea-147126213437\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.737747 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") pod \"75733567-f2a6-4331-bdea-147126213437\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.738365 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "75733567-f2a6-4331-bdea-147126213437" (UID: "75733567-f2a6-4331-bdea-147126213437"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.737778 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-0\") pod \"75733567-f2a6-4331-bdea-147126213437\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.738962 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-config\") pod \"75733567-f2a6-4331-bdea-147126213437\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.738999 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-thanos-prometheus-http-client-file\") pod \"75733567-f2a6-4331-bdea-147126213437\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.739548 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "75733567-f2a6-4331-bdea-147126213437" (UID: "75733567-f2a6-4331-bdea-147126213437"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.739016 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-1\") pod \"75733567-f2a6-4331-bdea-147126213437\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.744735 4881 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.744759 4881 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.747763 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75733567-f2a6-4331-bdea-147126213437-kube-api-access-n2vkg" (OuterVolumeSpecName: "kube-api-access-n2vkg") pod "75733567-f2a6-4331-bdea-147126213437" (UID: "75733567-f2a6-4331-bdea-147126213437"). InnerVolumeSpecName "kube-api-access-n2vkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.756548 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-config" (OuterVolumeSpecName: "config") pod "75733567-f2a6-4331-bdea-147126213437" (UID: "75733567-f2a6-4331-bdea-147126213437"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.849767 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-2\") pod \"75733567-f2a6-4331-bdea-147126213437\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.849908 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/75733567-f2a6-4331-bdea-147126213437-config-out\") pod \"75733567-f2a6-4331-bdea-147126213437\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.849943 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/75733567-f2a6-4331-bdea-147126213437-tls-assets\") pod \"75733567-f2a6-4331-bdea-147126213437\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.850698 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2vkg\" (UniqueName: \"kubernetes.io/projected/75733567-f2a6-4331-bdea-147126213437-kube-api-access-n2vkg\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.850725 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.851610 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "75733567-f2a6-4331-bdea-147126213437" (UID: "75733567-f2a6-4331-bdea-147126213437"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.860426 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75733567-f2a6-4331-bdea-147126213437-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "75733567-f2a6-4331-bdea-147126213437" (UID: "75733567-f2a6-4331-bdea-147126213437"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.861229 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "75733567-f2a6-4331-bdea-147126213437" (UID: "75733567-f2a6-4331-bdea-147126213437"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.863924 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-web-config" (OuterVolumeSpecName: "web-config") pod "75733567-f2a6-4331-bdea-147126213437" (UID: "75733567-f2a6-4331-bdea-147126213437"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.871682 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75733567-f2a6-4331-bdea-147126213437-config-out" (OuterVolumeSpecName: "config-out") pod "75733567-f2a6-4331-bdea-147126213437" (UID: "75733567-f2a6-4331-bdea-147126213437"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.884270 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "75733567-f2a6-4331-bdea-147126213437" (UID: "75733567-f2a6-4331-bdea-147126213437"). InnerVolumeSpecName "pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.952931 4881 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-web-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.953012 4881 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") on node \"crc\" " Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.953028 4881 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.953040 4881 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.953051 4881 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/75733567-f2a6-4331-bdea-147126213437-config-out\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.953060 4881 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/75733567-f2a6-4331-bdea-147126213437-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.984212 4881 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.984464 4881 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a") on node "crc" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.054456 4881 reconciler_common.go:293] "Volume detached for volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.649908 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"289fc976662972c742902d4838622aa28afe05c468c3ba1562bd132609c2c02d"} Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.671651 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"75733567-f2a6-4331-bdea-147126213437","Type":"ContainerDied","Data":"648f9884533415a5c2309f4dd9efc2ccd6cbaeb098dca1475cdb0221de466d52"} Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.671753 4881 scope.go:117] "RemoveContainer" containerID="2d247ee2c4ae6dcda1bc7bdb88b6f46d738cb9050ce2b5c108235bf069c56986" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.672100 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.703528 4881 scope.go:117] "RemoveContainer" containerID="5833adb0117a8d41a669b51e672fa4471dd8e152778ebc0db32735d286328549" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.724224 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.733001 4881 scope.go:117] "RemoveContainer" containerID="a56efe39870006b796c3201c8dc3334fb4d25c094ef7e6facbf2f393bd54653c" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.739188 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.769625 4881 scope.go:117] "RemoveContainer" containerID="3d2c36495c41eb6152a1fc9a05412fce52a5f353e0b59004227d5efed6039fb6" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.777951 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 11:18:37 crc kubenswrapper[4881]: E0121 11:18:37.778336 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75733567-f2a6-4331-bdea-147126213437" containerName="config-reloader" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.778357 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="75733567-f2a6-4331-bdea-147126213437" containerName="config-reloader" Jan 21 11:18:37 crc kubenswrapper[4881]: E0121 11:18:37.778372 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75733567-f2a6-4331-bdea-147126213437" containerName="thanos-sidecar" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.778379 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="75733567-f2a6-4331-bdea-147126213437" containerName="thanos-sidecar" Jan 21 11:18:37 crc kubenswrapper[4881]: E0121 11:18:37.778395 4881 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="70a2b37a-049a-45a1-aeb5-6b7d5515dd69" containerName="mariadb-account-create-update" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.778401 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="70a2b37a-049a-45a1-aeb5-6b7d5515dd69" containerName="mariadb-account-create-update" Jan 21 11:18:37 crc kubenswrapper[4881]: E0121 11:18:37.778418 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75733567-f2a6-4331-bdea-147126213437" containerName="prometheus" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.778424 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="75733567-f2a6-4331-bdea-147126213437" containerName="prometheus" Jan 21 11:18:37 crc kubenswrapper[4881]: E0121 11:18:37.778435 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75733567-f2a6-4331-bdea-147126213437" containerName="init-config-reloader" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.778441 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="75733567-f2a6-4331-bdea-147126213437" containerName="init-config-reloader" Jan 21 11:18:37 crc kubenswrapper[4881]: E0121 11:18:37.778450 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27451133-57c8-4991-aae0-ec0a82432176" containerName="swift-ring-rebalance" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.778457 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="27451133-57c8-4991-aae0-ec0a82432176" containerName="swift-ring-rebalance" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.785277 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="70a2b37a-049a-45a1-aeb5-6b7d5515dd69" containerName="mariadb-account-create-update" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.785335 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="75733567-f2a6-4331-bdea-147126213437" containerName="prometheus" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.785349 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="75733567-f2a6-4331-bdea-147126213437" containerName="config-reloader" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.785365 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="27451133-57c8-4991-aae0-ec0a82432176" containerName="swift-ring-rebalance" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.785373 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="75733567-f2a6-4331-bdea-147126213437" containerName="thanos-sidecar" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.805297 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.811187 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.812551 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.813168 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.814060 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.813655 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.815015 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.824976 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-jwvdx" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.839163 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.863538 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.902603 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.004231 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.004343 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.004373 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.004389 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.004414 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9ng7\" (UniqueName: \"kubernetes.io/projected/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-kube-api-access-d9ng7\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.004436 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.004471 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.004519 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.004549 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.004584 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.004622 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.004641 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod 
\"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.004659 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.106797 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.107241 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.107286 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.107327 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.107376 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.107457 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.107479 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc 
kubenswrapper[4881]: I0121 11:18:38.107507 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.107557 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9ng7\" (UniqueName: \"kubernetes.io/projected/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-kube-api-access-d9ng7\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.107583 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.107708 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.107815 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.107849 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.109367 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.110016 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.110127 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.115683 4881 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.115731 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3c91253029fdcc57c7bcc13c4ee1dc503079fe71761fa62e5d04837e0b8b075e/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.119407 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.129169 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.131655 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.132922 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.135703 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.136770 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.137647 4881 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.142093 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.144126 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9ng7\" (UniqueName: \"kubernetes.io/projected/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-kube-api-access-d9ng7\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.233454 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.510585 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.814126 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"7fea739fffe156a19d69d7b51628d39a5e4c2419e42dcdc81465b1fd6fd1e3e1"} Jan 21 11:18:39 crc kubenswrapper[4881]: I0121 11:18:39.162305 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 11:18:39 crc kubenswrapper[4881]: W0121 11:18:39.179069 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5ae3126_d6d3_4268_8e35_e216eabcc6f4.slice/crio-044ed91f90f2699cb0b2df7171e316d9c18fb8084140392d8cb4307802d39a3c WatchSource:0}: Error finding container 044ed91f90f2699cb0b2df7171e316d9c18fb8084140392d8cb4307802d39a3c: Status 404 returned error can't find the container with id 044ed91f90f2699cb0b2df7171e316d9c18fb8084140392d8cb4307802d39a3c Jan 21 11:18:39 crc kubenswrapper[4881]: I0121 11:18:39.327301 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75733567-f2a6-4331-bdea-147126213437" path="/var/lib/kubelet/pods/75733567-f2a6-4331-bdea-147126213437/volumes" Jan 21 11:18:39 crc kubenswrapper[4881]: I0121 11:18:39.626644 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="75733567-f2a6-4331-bdea-147126213437" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.113:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 11:18:39 crc kubenswrapper[4881]: I0121 11:18:39.831484 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"3ddd9b68c26af9e4e85ec9549e5f6dce7d1eb4439d142a49985d4929d3f28693"} Jan 21 11:18:39 crc kubenswrapper[4881]: I0121 11:18:39.832707 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5ae3126-d6d3-4268-8e35-e216eabcc6f4","Type":"ContainerStarted","Data":"044ed91f90f2699cb0b2df7171e316d9c18fb8084140392d8cb4307802d39a3c"} Jan 21 11:18:39 crc kubenswrapper[4881]: I0121 11:18:39.952609 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=40.502813543 podStartE2EDuration="45.952588942s" podCreationTimestamp="2026-01-21 11:17:54 +0000 UTC" firstStartedPulling="2026-01-21 11:18:29.384698183 +0000 UTC m=+1296.644654642" lastFinishedPulling="2026-01-21 11:18:34.834473572 +0000 UTC m=+1302.094430041" observedRunningTime="2026-01-21 11:18:39.949393733 +0000 UTC m=+1307.209350212" watchObservedRunningTime="2026-01-21 11:18:39.952588942 +0000 UTC m=+1307.212545411" Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.191327 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="078c2368-b247-49d4-8723-fd93918e99b1" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.275838 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c88945fd5-tqqvj"] Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.277685 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.279954 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.296058 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c88945fd5-tqqvj"] Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.366321 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-ovsdbserver-nb\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.366541 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-ovsdbserver-sb\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.366604 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-dns-svc\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.366648 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4gqq\" (UniqueName: 
\"kubernetes.io/projected/e51b074c-ae44-4db9-9ce6-b656a961dfaf-kube-api-access-m4gqq\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.366688 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-dns-swift-storage-0\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.366717 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-config\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.468356 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-ovsdbserver-nb\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.468498 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-ovsdbserver-sb\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.468571 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-dns-svc\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.468628 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4gqq\" (UniqueName: \"kubernetes.io/projected/e51b074c-ae44-4db9-9ce6-b656a961dfaf-kube-api-access-m4gqq\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.468752 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-dns-swift-storage-0\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.468799 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-config\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.469558 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-ovsdbserver-sb\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.469573 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-ovsdbserver-nb\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.469682 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-dns-swift-storage-0\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.469688 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-dns-svc\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.469923 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-config\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.477628 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f7e90972-9be1-4d3e-852e-e7f7df6e6623" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.498615 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4gqq\" (UniqueName: \"kubernetes.io/projected/e51b074c-ae44-4db9-9ce6-b656a961dfaf-kube-api-access-m4gqq\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.586947 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-notifications-server-0" podUID="44bcf219-3358-4596-9d1e-88a51c415266" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.108:5671: connect: connection refused" Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.603329 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" Jan 21 11:18:41 crc kubenswrapper[4881]: I0121 11:18:41.130423 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c88945fd5-tqqvj"] Jan 21 11:18:41 crc kubenswrapper[4881]: W0121 11:18:41.138995 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode51b074c_ae44_4db9_9ce6_b656a961dfaf.slice/crio-485dc8c96eb7030a8e95c465abb23eb90b718f53333b55d575fff9445925584c WatchSource:0}: Error finding container 485dc8c96eb7030a8e95c465abb23eb90b718f53333b55d575fff9445925584c: Status 404 returned error can't find the container with id 485dc8c96eb7030a8e95c465abb23eb90b718f53333b55d575fff9445925584c Jan 21 11:18:42 crc kubenswrapper[4881]: I0121 11:18:42.118082 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" event={"ID":"e51b074c-ae44-4db9-9ce6-b656a961dfaf","Type":"ContainerStarted","Data":"596eab5e695f6c4af1ee0501f1a922c8b4ac8e567cedab5865035324bb33f0cb"} Jan 21 11:18:42 crc kubenswrapper[4881]: I0121 11:18:42.118542 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" event={"ID":"e51b074c-ae44-4db9-9ce6-b656a961dfaf","Type":"ContainerStarted","Data":"485dc8c96eb7030a8e95c465abb23eb90b718f53333b55d575fff9445925584c"} Jan 21 11:18:43 crc kubenswrapper[4881]: I0121 11:18:43.130681 4881 generic.go:334] "Generic (PLEG): container finished" podID="e51b074c-ae44-4db9-9ce6-b656a961dfaf" containerID="596eab5e695f6c4af1ee0501f1a922c8b4ac8e567cedab5865035324bb33f0cb" exitCode=0 Jan 21 11:18:43 crc kubenswrapper[4881]: I0121 11:18:43.130882 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" event={"ID":"e51b074c-ae44-4db9-9ce6-b656a961dfaf","Type":"ContainerDied","Data":"596eab5e695f6c4af1ee0501f1a922c8b4ac8e567cedab5865035324bb33f0cb"} Jan 21 11:18:44 crc kubenswrapper[4881]: I0121 11:18:44.179971 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5ae3126-d6d3-4268-8e35-e216eabcc6f4","Type":"ContainerStarted","Data":"a35359d5b5faf07c0a8496b05737dc67dd3207c714c5cd8b7b98eda3d6b21eb4"} Jan 21 11:18:44 crc kubenswrapper[4881]: I0121 11:18:44.184770 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" event={"ID":"e51b074c-ae44-4db9-9ce6-b656a961dfaf","Type":"ContainerStarted","Data":"942d5c3de6fa62e5024b8e526fb126bf73a64902207ddcb2a51d04aa20661a8c"} Jan 21 11:18:44 crc kubenswrapper[4881]: I0121 11:18:44.184900 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" Jan 21 11:18:50 crc kubenswrapper[4881]: I0121 11:18:50.192026 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:18:50 crc kubenswrapper[4881]: I0121 11:18:50.229599 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" podStartSLOduration=10.229574974 podStartE2EDuration="10.229574974s" podCreationTimestamp="2026-01-21 11:18:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:18:44.253899689 +0000 UTC m=+1311.513856178" watchObservedRunningTime="2026-01-21 11:18:50.229574974 +0000 UTC m=+1317.489531443" Jan 21 11:18:50 crc 
kubenswrapper[4881]: I0121 11:18:50.481052 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 21 11:18:50 crc kubenswrapper[4881]: I0121 11:18:50.591992 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:18:50 crc kubenswrapper[4881]: I0121 11:18:50.605013 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" Jan 21 11:18:51 crc kubenswrapper[4881]: I0121 11:18:51.158578 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84cb884cf9-wmwx8"] Jan 21 11:18:51 crc kubenswrapper[4881]: I0121 11:18:51.158818 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" podUID="62435f30-e8fc-4fcd-8b96-4a604439965e" containerName="dnsmasq-dns" containerID="cri-o://a28ebc9fc60a5b5f4c6d8022f7888aae1167af104726fcaf924581e71afbdd73" gracePeriod=10 Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.311257 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.491897 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-dns-svc\") pod \"62435f30-e8fc-4fcd-8b96-4a604439965e\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.492101 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-ovsdbserver-nb\") pod \"62435f30-e8fc-4fcd-8b96-4a604439965e\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.492158 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-config\") pod \"62435f30-e8fc-4fcd-8b96-4a604439965e\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.492338 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45wlj\" (UniqueName: \"kubernetes.io/projected/62435f30-e8fc-4fcd-8b96-4a604439965e-kube-api-access-45wlj\") pod \"62435f30-e8fc-4fcd-8b96-4a604439965e\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.493318 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-ovsdbserver-sb\") pod \"62435f30-e8fc-4fcd-8b96-4a604439965e\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.494075 4881 generic.go:334] "Generic (PLEG): container finished" podID="62435f30-e8fc-4fcd-8b96-4a604439965e" containerID="a28ebc9fc60a5b5f4c6d8022f7888aae1167af104726fcaf924581e71afbdd73" exitCode=0 Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.494189 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" event={"ID":"62435f30-e8fc-4fcd-8b96-4a604439965e","Type":"ContainerDied","Data":"a28ebc9fc60a5b5f4c6d8022f7888aae1167af104726fcaf924581e71afbdd73"} 
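The prober entries in this stretch show both probe styles the kubelet is running here: TCP dials against the rabbitmq AMQP-over-TLS port ("dial tcp 10.217.0.x:5671") and an HTTP GET against Prometheus's /-/ready endpoint, each flipping from "failure" to status="ready" once the service comes up. The pod specs themselves are not in the log, so the following sketch of plausible probe definitions is an assumption throughout: every numeric value is invented for illustration, and ProbeHandler is the field name in current k8s.io/api versions.

```go
package probes

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// rabbitmqReadiness mirrors the logged "dial tcp ...:5671" checks:
// a plain TCP connect against the AMQP-over-TLS port. The period and
// threshold are assumed, not taken from the log.
var rabbitmqReadiness = corev1.Probe{
	ProbeHandler: corev1.ProbeHandler{
		TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(5671)},
	},
	PeriodSeconds:    10, // assumed
	FailureThreshold: 3,  // assumed
}

// prometheusReadiness mirrors the earlier logged
// "Get http://10.217.0.113:9090/-/ready" failure; the
// "Client.Timeout exceeded" output implies a request deadline,
// but its real value is not logged.
var prometheusReadiness = corev1.Probe{
	ProbeHandler: corev1.ProbeHandler{
		HTTPGet: &corev1.HTTPGetAction{
			Path: "/-/ready",
			Port: intstr.FromInt(9090),
		},
	},
	TimeoutSeconds: 3, // assumed
}
```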
Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.494241 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" event={"ID":"62435f30-e8fc-4fcd-8b96-4a604439965e","Type":"ContainerDied","Data":"44f80926337efad13c65101fd501f43ed3467cedbf9bc0293c7241abb38a34e2"} Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.494262 4881 scope.go:117] "RemoveContainer" containerID="a28ebc9fc60a5b5f4c6d8022f7888aae1167af104726fcaf924581e71afbdd73" Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.494453 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.515859 4881 generic.go:334] "Generic (PLEG): container finished" podID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerID="a35359d5b5faf07c0a8496b05737dc67dd3207c714c5cd8b7b98eda3d6b21eb4" exitCode=0 Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.515901 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5ae3126-d6d3-4268-8e35-e216eabcc6f4","Type":"ContainerDied","Data":"a35359d5b5faf07c0a8496b05737dc67dd3207c714c5cd8b7b98eda3d6b21eb4"} Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.519872 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62435f30-e8fc-4fcd-8b96-4a604439965e-kube-api-access-45wlj" (OuterVolumeSpecName: "kube-api-access-45wlj") pod "62435f30-e8fc-4fcd-8b96-4a604439965e" (UID: "62435f30-e8fc-4fcd-8b96-4a604439965e"). InnerVolumeSpecName "kube-api-access-45wlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.599681 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45wlj\" (UniqueName: \"kubernetes.io/projected/62435f30-e8fc-4fcd-8b96-4a604439965e-kube-api-access-45wlj\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.641039 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "62435f30-e8fc-4fcd-8b96-4a604439965e" (UID: "62435f30-e8fc-4fcd-8b96-4a604439965e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.680660 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "62435f30-e8fc-4fcd-8b96-4a604439965e" (UID: "62435f30-e8fc-4fcd-8b96-4a604439965e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.700260 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-config" (OuterVolumeSpecName: "config") pod "62435f30-e8fc-4fcd-8b96-4a604439965e" (UID: "62435f30-e8fc-4fcd-8b96-4a604439965e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.708571 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.712445 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.713130 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.718475 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "62435f30-e8fc-4fcd-8b96-4a604439965e" (UID: "62435f30-e8fc-4fcd-8b96-4a604439965e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.788434 4881 scope.go:117] "RemoveContainer" containerID="f24832aadef02f1c7ff84c5f003b7d3cb18bb769662ee1a6581898a328c41e06" Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.815349 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.872189 4881 scope.go:117] "RemoveContainer" containerID="a28ebc9fc60a5b5f4c6d8022f7888aae1167af104726fcaf924581e71afbdd73" Jan 21 11:18:52 crc kubenswrapper[4881]: E0121 11:18:52.885105 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a28ebc9fc60a5b5f4c6d8022f7888aae1167af104726fcaf924581e71afbdd73\": container with ID starting with a28ebc9fc60a5b5f4c6d8022f7888aae1167af104726fcaf924581e71afbdd73 not found: ID does not exist" containerID="a28ebc9fc60a5b5f4c6d8022f7888aae1167af104726fcaf924581e71afbdd73" Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.885215 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a28ebc9fc60a5b5f4c6d8022f7888aae1167af104726fcaf924581e71afbdd73"} err="failed to get container status \"a28ebc9fc60a5b5f4c6d8022f7888aae1167af104726fcaf924581e71afbdd73\": rpc error: code = NotFound desc = could not find container \"a28ebc9fc60a5b5f4c6d8022f7888aae1167af104726fcaf924581e71afbdd73\": container with ID starting with a28ebc9fc60a5b5f4c6d8022f7888aae1167af104726fcaf924581e71afbdd73 not found: ID does not exist" Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.885267 4881 scope.go:117] "RemoveContainer" containerID="f24832aadef02f1c7ff84c5f003b7d3cb18bb769662ee1a6581898a328c41e06" Jan 21 11:18:52 crc kubenswrapper[4881]: E0121 11:18:52.892045 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f24832aadef02f1c7ff84c5f003b7d3cb18bb769662ee1a6581898a328c41e06\": container with ID starting with f24832aadef02f1c7ff84c5f003b7d3cb18bb769662ee1a6581898a328c41e06 not found: ID does not exist" 
containerID="f24832aadef02f1c7ff84c5f003b7d3cb18bb769662ee1a6581898a328c41e06" Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.892137 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f24832aadef02f1c7ff84c5f003b7d3cb18bb769662ee1a6581898a328c41e06"} err="failed to get container status \"f24832aadef02f1c7ff84c5f003b7d3cb18bb769662ee1a6581898a328c41e06\": rpc error: code = NotFound desc = could not find container \"f24832aadef02f1c7ff84c5f003b7d3cb18bb769662ee1a6581898a328c41e06\": container with ID starting with f24832aadef02f1c7ff84c5f003b7d3cb18bb769662ee1a6581898a328c41e06 not found: ID does not exist" Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.893798 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84cb884cf9-wmwx8"] Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.915693 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84cb884cf9-wmwx8"] Jan 21 11:18:53 crc kubenswrapper[4881]: I0121 11:18:53.064471 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-t4mx7"] Jan 21 11:18:53 crc kubenswrapper[4881]: E0121 11:18:53.065363 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62435f30-e8fc-4fcd-8b96-4a604439965e" containerName="dnsmasq-dns" Jan 21 11:18:53 crc kubenswrapper[4881]: I0121 11:18:53.065386 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="62435f30-e8fc-4fcd-8b96-4a604439965e" containerName="dnsmasq-dns" Jan 21 11:18:55 crc kubenswrapper[4881]: E0121 11:18:53.066137 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62435f30-e8fc-4fcd-8b96-4a604439965e" containerName="init" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.066168 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="62435f30-e8fc-4fcd-8b96-4a604439965e" containerName="init" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.066627 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="62435f30-e8fc-4fcd-8b96-4a604439965e" containerName="dnsmasq-dns" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.068763 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-t4mx7" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.081086 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-vlkhp" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.081405 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.105448 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-t4mx7"] Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.186484 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-ktp2w"] Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.188181 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-ktp2w" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.202254 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ktp2w"] Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.252976 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-config-data\") pod \"watcher-db-sync-t4mx7\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") " pod="openstack/watcher-db-sync-t4mx7" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.253028 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-db-sync-config-data\") pod \"watcher-db-sync-t4mx7\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") " pod="openstack/watcher-db-sync-t4mx7" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.253076 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd8cs\" (UniqueName: \"kubernetes.io/projected/bc7e598c-b449-4e8c-9214-44e27cb45e53-kube-api-access-gd8cs\") pod \"watcher-db-sync-t4mx7\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") " pod="openstack/watcher-db-sync-t4mx7" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.253141 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-combined-ca-bundle\") pod \"watcher-db-sync-t4mx7\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") " pod="openstack/watcher-db-sync-t4mx7" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.298546 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-r9r4z"] Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.303529 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-r9r4z" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.331024 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62435f30-e8fc-4fcd-8b96-4a604439965e" path="/var/lib/kubelet/pods/62435f30-e8fc-4fcd-8b96-4a604439965e/volumes" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.355812 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-config-data\") pod \"watcher-db-sync-t4mx7\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") " pod="openstack/watcher-db-sync-t4mx7" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.355872 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5g6p\" (UniqueName: \"kubernetes.io/projected/5d72ab14-b1c2-4382-847a-00eb254ac958-kube-api-access-z5g6p\") pod \"cinder-db-create-ktp2w\" (UID: \"5d72ab14-b1c2-4382-847a-00eb254ac958\") " pod="openstack/cinder-db-create-ktp2w" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.355905 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-db-sync-config-data\") pod \"watcher-db-sync-t4mx7\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") " pod="openstack/watcher-db-sync-t4mx7" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.355940 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd8cs\" (UniqueName: \"kubernetes.io/projected/bc7e598c-b449-4e8c-9214-44e27cb45e53-kube-api-access-gd8cs\") pod \"watcher-db-sync-t4mx7\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") " pod="openstack/watcher-db-sync-t4mx7" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.355977 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d72ab14-b1c2-4382-847a-00eb254ac958-operator-scripts\") pod \"cinder-db-create-ktp2w\" (UID: \"5d72ab14-b1c2-4382-847a-00eb254ac958\") " pod="openstack/cinder-db-create-ktp2w" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.356014 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-combined-ca-bundle\") pod \"watcher-db-sync-t4mx7\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") " pod="openstack/watcher-db-sync-t4mx7" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.359507 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-r9r4z"] Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.366806 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-combined-ca-bundle\") pod \"watcher-db-sync-t4mx7\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") " pod="openstack/watcher-db-sync-t4mx7" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.367535 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-config-data\") pod \"watcher-db-sync-t4mx7\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") " pod="openstack/watcher-db-sync-t4mx7" Jan 21 11:18:55 crc 
kubenswrapper[4881]: I0121 11:18:53.368232 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-db-sync-config-data\") pod \"watcher-db-sync-t4mx7\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") " pod="openstack/watcher-db-sync-t4mx7" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.377474 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-a5aa-account-create-update-j2nc8"] Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.379657 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a5aa-account-create-update-j2nc8" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.389300 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.398290 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a5aa-account-create-update-j2nc8"] Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.409589 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd8cs\" (UniqueName: \"kubernetes.io/projected/bc7e598c-b449-4e8c-9214-44e27cb45e53-kube-api-access-gd8cs\") pod \"watcher-db-sync-t4mx7\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") " pod="openstack/watcher-db-sync-t4mx7" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.417868 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-t4mx7" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.458775 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5g6p\" (UniqueName: \"kubernetes.io/projected/5d72ab14-b1c2-4382-847a-00eb254ac958-kube-api-access-z5g6p\") pod \"cinder-db-create-ktp2w\" (UID: \"5d72ab14-b1c2-4382-847a-00eb254ac958\") " pod="openstack/cinder-db-create-ktp2w" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.458873 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8cfe009-eba2-4713-b50f-cc334b4ca691-operator-scripts\") pod \"barbican-db-create-r9r4z\" (UID: \"c8cfe009-eba2-4713-b50f-cc334b4ca691\") " pod="openstack/barbican-db-create-r9r4z" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.458944 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d72ab14-b1c2-4382-847a-00eb254ac958-operator-scripts\") pod \"cinder-db-create-ktp2w\" (UID: \"5d72ab14-b1c2-4382-847a-00eb254ac958\") " pod="openstack/cinder-db-create-ktp2w" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.459017 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfm9d\" (UniqueName: \"kubernetes.io/projected/c8cfe009-eba2-4713-b50f-cc334b4ca691-kube-api-access-qfm9d\") pod \"barbican-db-create-r9r4z\" (UID: \"c8cfe009-eba2-4713-b50f-cc334b4ca691\") " pod="openstack/barbican-db-create-r9r4z" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.460913 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d72ab14-b1c2-4382-847a-00eb254ac958-operator-scripts\") pod \"cinder-db-create-ktp2w\" (UID: \"5d72ab14-b1c2-4382-847a-00eb254ac958\") " 
pod="openstack/cinder-db-create-ktp2w" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.513657 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5g6p\" (UniqueName: \"kubernetes.io/projected/5d72ab14-b1c2-4382-847a-00eb254ac958-kube-api-access-z5g6p\") pod \"cinder-db-create-ktp2w\" (UID: \"5d72ab14-b1c2-4382-847a-00eb254ac958\") " pod="openstack/cinder-db-create-ktp2w" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.560260 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-44pdb"] Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.562156 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-44pdb" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.567531 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8cfe009-eba2-4713-b50f-cc334b4ca691-operator-scripts\") pod \"barbican-db-create-r9r4z\" (UID: \"c8cfe009-eba2-4713-b50f-cc334b4ca691\") " pod="openstack/barbican-db-create-r9r4z" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.567647 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm6lx\" (UniqueName: \"kubernetes.io/projected/ec3ba10e-2cbd-4350-9014-27a92932849f-kube-api-access-nm6lx\") pod \"barbican-a5aa-account-create-update-j2nc8\" (UID: \"ec3ba10e-2cbd-4350-9014-27a92932849f\") " pod="openstack/barbican-a5aa-account-create-update-j2nc8" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.567755 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec3ba10e-2cbd-4350-9014-27a92932849f-operator-scripts\") pod \"barbican-a5aa-account-create-update-j2nc8\" (UID: \"ec3ba10e-2cbd-4350-9014-27a92932849f\") " pod="openstack/barbican-a5aa-account-create-update-j2nc8" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.567843 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfm9d\" (UniqueName: \"kubernetes.io/projected/c8cfe009-eba2-4713-b50f-cc334b4ca691-kube-api-access-qfm9d\") pod \"barbican-db-create-r9r4z\" (UID: \"c8cfe009-eba2-4713-b50f-cc334b4ca691\") " pod="openstack/barbican-db-create-r9r4z" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.569297 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8cfe009-eba2-4713-b50f-cc334b4ca691-operator-scripts\") pod \"barbican-db-create-r9r4z\" (UID: \"c8cfe009-eba2-4713-b50f-cc334b4ca691\") " pod="openstack/barbican-db-create-r9r4z" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.580756 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-44pdb"] Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.584031 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.584344 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.584640 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.591315 4881 reflector.go:368] Caches populated for *v1.Secret 
from object-"openstack"/"keystone-keystone-dockercfg-j54nk" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.629589 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfm9d\" (UniqueName: \"kubernetes.io/projected/c8cfe009-eba2-4713-b50f-cc334b4ca691-kube-api-access-qfm9d\") pod \"barbican-db-create-r9r4z\" (UID: \"c8cfe009-eba2-4713-b50f-cc334b4ca691\") " pod="openstack/barbican-db-create-r9r4z" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.669560 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34efcb76-01fb-490b-88c0-a4ee1363a01e-config-data\") pod \"keystone-db-sync-44pdb\" (UID: \"34efcb76-01fb-490b-88c0-a4ee1363a01e\") " pod="openstack/keystone-db-sync-44pdb" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.669621 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm6lx\" (UniqueName: \"kubernetes.io/projected/ec3ba10e-2cbd-4350-9014-27a92932849f-kube-api-access-nm6lx\") pod \"barbican-a5aa-account-create-update-j2nc8\" (UID: \"ec3ba10e-2cbd-4350-9014-27a92932849f\") " pod="openstack/barbican-a5aa-account-create-update-j2nc8" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.669668 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec3ba10e-2cbd-4350-9014-27a92932849f-operator-scripts\") pod \"barbican-a5aa-account-create-update-j2nc8\" (UID: \"ec3ba10e-2cbd-4350-9014-27a92932849f\") " pod="openstack/barbican-a5aa-account-create-update-j2nc8" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.669695 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34efcb76-01fb-490b-88c0-a4ee1363a01e-combined-ca-bundle\") pod \"keystone-db-sync-44pdb\" (UID: \"34efcb76-01fb-490b-88c0-a4ee1363a01e\") " pod="openstack/keystone-db-sync-44pdb" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.669821 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5fnp\" (UniqueName: \"kubernetes.io/projected/34efcb76-01fb-490b-88c0-a4ee1363a01e-kube-api-access-r5fnp\") pod \"keystone-db-sync-44pdb\" (UID: \"34efcb76-01fb-490b-88c0-a4ee1363a01e\") " pod="openstack/keystone-db-sync-44pdb" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.674198 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-r9r4z" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.675043 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec3ba10e-2cbd-4350-9014-27a92932849f-operator-scripts\") pod \"barbican-a5aa-account-create-update-j2nc8\" (UID: \"ec3ba10e-2cbd-4350-9014-27a92932849f\") " pod="openstack/barbican-a5aa-account-create-update-j2nc8" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.718748 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm6lx\" (UniqueName: \"kubernetes.io/projected/ec3ba10e-2cbd-4350-9014-27a92932849f-kube-api-access-nm6lx\") pod \"barbican-a5aa-account-create-update-j2nc8\" (UID: \"ec3ba10e-2cbd-4350-9014-27a92932849f\") " pod="openstack/barbican-a5aa-account-create-update-j2nc8" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.786106 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="cd1973a5-773b-438b-aab7-709fb828324d" containerName="galera" probeResult="failure" output="command timed out" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.790330 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34efcb76-01fb-490b-88c0-a4ee1363a01e-config-data\") pod \"keystone-db-sync-44pdb\" (UID: \"34efcb76-01fb-490b-88c0-a4ee1363a01e\") " pod="openstack/keystone-db-sync-44pdb" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.790430 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34efcb76-01fb-490b-88c0-a4ee1363a01e-combined-ca-bundle\") pod \"keystone-db-sync-44pdb\" (UID: \"34efcb76-01fb-490b-88c0-a4ee1363a01e\") " pod="openstack/keystone-db-sync-44pdb" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.790566 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5fnp\" (UniqueName: \"kubernetes.io/projected/34efcb76-01fb-490b-88c0-a4ee1363a01e-kube-api-access-r5fnp\") pod \"keystone-db-sync-44pdb\" (UID: \"34efcb76-01fb-490b-88c0-a4ee1363a01e\") " pod="openstack/keystone-db-sync-44pdb" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.798890 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="cd1973a5-773b-438b-aab7-709fb828324d" containerName="galera" probeResult="failure" output="command timed out" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.803802 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34efcb76-01fb-490b-88c0-a4ee1363a01e-combined-ca-bundle\") pod \"keystone-db-sync-44pdb\" (UID: \"34efcb76-01fb-490b-88c0-a4ee1363a01e\") " pod="openstack/keystone-db-sync-44pdb" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.808206 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.816948 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-ktp2w" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.821875 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34efcb76-01fb-490b-88c0-a4ee1363a01e-config-data\") pod \"keystone-db-sync-44pdb\" (UID: \"34efcb76-01fb-490b-88c0-a4ee1363a01e\") " pod="openstack/keystone-db-sync-44pdb" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:54.173687 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a5aa-account-create-update-j2nc8" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:54.230648 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5fnp\" (UniqueName: \"kubernetes.io/projected/34efcb76-01fb-490b-88c0-a4ee1363a01e-kube-api-access-r5fnp\") pod \"keystone-db-sync-44pdb\" (UID: \"34efcb76-01fb-490b-88c0-a4ee1363a01e\") " pod="openstack/keystone-db-sync-44pdb" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:54.520256 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-j54nk" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:54.528531 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-44pdb" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:54.562860 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-c7b7-account-create-update-dcz9r"] Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:54.564353 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c7b7-account-create-update-dcz9r" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:54.576288 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:54.689357 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c7b7-account-create-update-dcz9r"] Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:54.713482 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0145b8f9-5452-4f0e-819c-61fbb8badffb-operator-scripts\") pod \"cinder-c7b7-account-create-update-dcz9r\" (UID: \"0145b8f9-5452-4f0e-819c-61fbb8badffb\") " pod="openstack/cinder-c7b7-account-create-update-dcz9r" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:54.713573 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd8bn\" (UniqueName: \"kubernetes.io/projected/0145b8f9-5452-4f0e-819c-61fbb8badffb-kube-api-access-wd8bn\") pod \"cinder-c7b7-account-create-update-dcz9r\" (UID: \"0145b8f9-5452-4f0e-819c-61fbb8badffb\") " pod="openstack/cinder-c7b7-account-create-update-dcz9r" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:54.779610 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5ae3126-d6d3-4268-8e35-e216eabcc6f4","Type":"ContainerStarted","Data":"8325ef681bcdbc9f213b1b50d5070cda09f322843e0e7d334a000739ac240fa4"} Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:54.814996 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd8bn\" (UniqueName: \"kubernetes.io/projected/0145b8f9-5452-4f0e-819c-61fbb8badffb-kube-api-access-wd8bn\") pod 
\"cinder-c7b7-account-create-update-dcz9r\" (UID: \"0145b8f9-5452-4f0e-819c-61fbb8badffb\") " pod="openstack/cinder-c7b7-account-create-update-dcz9r" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:54.815206 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0145b8f9-5452-4f0e-819c-61fbb8badffb-operator-scripts\") pod \"cinder-c7b7-account-create-update-dcz9r\" (UID: \"0145b8f9-5452-4f0e-819c-61fbb8badffb\") " pod="openstack/cinder-c7b7-account-create-update-dcz9r" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:54.819086 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0145b8f9-5452-4f0e-819c-61fbb8badffb-operator-scripts\") pod \"cinder-c7b7-account-create-update-dcz9r\" (UID: \"0145b8f9-5452-4f0e-819c-61fbb8badffb\") " pod="openstack/cinder-c7b7-account-create-update-dcz9r" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:54.882983 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd8bn\" (UniqueName: \"kubernetes.io/projected/0145b8f9-5452-4f0e-819c-61fbb8badffb-kube-api-access-wd8bn\") pod \"cinder-c7b7-account-create-update-dcz9r\" (UID: \"0145b8f9-5452-4f0e-819c-61fbb8badffb\") " pod="openstack/cinder-c7b7-account-create-update-dcz9r" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:55.133832 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c7b7-account-create-update-dcz9r" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:55.871028 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-82x9l"] Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:55.874132 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-82x9l" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:55.894223 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-82x9l"] Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:55.942601 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j4f4\" (UniqueName: \"kubernetes.io/projected/b4b2b4e9-304c-47ae-939a-9d938d012b90-kube-api-access-7j4f4\") pod \"glance-db-create-82x9l\" (UID: \"b4b2b4e9-304c-47ae-939a-9d938d012b90\") " pod="openstack/glance-db-create-82x9l" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:55.942745 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4b2b4e9-304c-47ae-939a-9d938d012b90-operator-scripts\") pod \"glance-db-create-82x9l\" (UID: \"b4b2b4e9-304c-47ae-939a-9d938d012b90\") " pod="openstack/glance-db-create-82x9l" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:55.972604 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-3649-account-create-update-pqj5m"] Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:55.974486 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-3649-account-create-update-pqj5m" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:55.978520 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:55.995200 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3649-account-create-update-pqj5m"] Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.047625 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j4f4\" (UniqueName: \"kubernetes.io/projected/b4b2b4e9-304c-47ae-939a-9d938d012b90-kube-api-access-7j4f4\") pod \"glance-db-create-82x9l\" (UID: \"b4b2b4e9-304c-47ae-939a-9d938d012b90\") " pod="openstack/glance-db-create-82x9l" Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.049234 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6f337c-95ec-448f-ab58-e7e7fe7abfd4-operator-scripts\") pod \"glance-3649-account-create-update-pqj5m\" (UID: \"6f6f337c-95ec-448f-ab58-e7e7fe7abfd4\") " pod="openstack/glance-3649-account-create-update-pqj5m" Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.049380 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxqzm\" (UniqueName: \"kubernetes.io/projected/6f6f337c-95ec-448f-ab58-e7e7fe7abfd4-kube-api-access-fxqzm\") pod \"glance-3649-account-create-update-pqj5m\" (UID: \"6f6f337c-95ec-448f-ab58-e7e7fe7abfd4\") " pod="openstack/glance-3649-account-create-update-pqj5m" Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.049590 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4b2b4e9-304c-47ae-939a-9d938d012b90-operator-scripts\") pod \"glance-db-create-82x9l\" (UID: \"b4b2b4e9-304c-47ae-939a-9d938d012b90\") " pod="openstack/glance-db-create-82x9l" Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.051260 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4b2b4e9-304c-47ae-939a-9d938d012b90-operator-scripts\") pod \"glance-db-create-82x9l\" (UID: \"b4b2b4e9-304c-47ae-939a-9d938d012b90\") " pod="openstack/glance-db-create-82x9l" Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.097473 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j4f4\" (UniqueName: \"kubernetes.io/projected/b4b2b4e9-304c-47ae-939a-9d938d012b90-kube-api-access-7j4f4\") pod \"glance-db-create-82x9l\" (UID: \"b4b2b4e9-304c-47ae-939a-9d938d012b90\") " pod="openstack/glance-db-create-82x9l" Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.151419 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6f337c-95ec-448f-ab58-e7e7fe7abfd4-operator-scripts\") pod \"glance-3649-account-create-update-pqj5m\" (UID: \"6f6f337c-95ec-448f-ab58-e7e7fe7abfd4\") " pod="openstack/glance-3649-account-create-update-pqj5m" Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.151468 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxqzm\" (UniqueName: \"kubernetes.io/projected/6f6f337c-95ec-448f-ab58-e7e7fe7abfd4-kube-api-access-fxqzm\") pod 
\"glance-3649-account-create-update-pqj5m\" (UID: \"6f6f337c-95ec-448f-ab58-e7e7fe7abfd4\") " pod="openstack/glance-3649-account-create-update-pqj5m" Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.152679 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6f337c-95ec-448f-ab58-e7e7fe7abfd4-operator-scripts\") pod \"glance-3649-account-create-update-pqj5m\" (UID: \"6f6f337c-95ec-448f-ab58-e7e7fe7abfd4\") " pod="openstack/glance-3649-account-create-update-pqj5m" Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.203464 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-82x9l" Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.219441 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-b544m"] Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.221399 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-b544m" Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.246037 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-170f-account-create-update-8bt4l"] Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.247889 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-170f-account-create-update-8bt4l" Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.250420 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxqzm\" (UniqueName: \"kubernetes.io/projected/6f6f337c-95ec-448f-ab58-e7e7fe7abfd4-kube-api-access-fxqzm\") pod \"glance-3649-account-create-update-pqj5m\" (UID: \"6f6f337c-95ec-448f-ab58-e7e7fe7abfd4\") " pod="openstack/glance-3649-account-create-update-pqj5m" Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.259634 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.270289 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-b544m"] Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.285379 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-170f-account-create-update-8bt4l"] Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.307483 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-3649-account-create-update-pqj5m" Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.658217 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmx6h\" (UniqueName: \"kubernetes.io/projected/c837cab9-43a5-4b84-a0bd-d915bca31600-kube-api-access-gmx6h\") pod \"neutron-170f-account-create-update-8bt4l\" (UID: \"c837cab9-43a5-4b84-a0bd-d915bca31600\") " pod="openstack/neutron-170f-account-create-update-8bt4l" Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.658293 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/760e8dbf-d827-42ef-969c-1c7409f7ac20-operator-scripts\") pod \"neutron-db-create-b544m\" (UID: \"760e8dbf-d827-42ef-969c-1c7409f7ac20\") " pod="openstack/neutron-db-create-b544m" Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.658336 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c837cab9-43a5-4b84-a0bd-d915bca31600-operator-scripts\") pod \"neutron-170f-account-create-update-8bt4l\" (UID: \"c837cab9-43a5-4b84-a0bd-d915bca31600\") " pod="openstack/neutron-170f-account-create-update-8bt4l" Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.658428 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft5l8\" (UniqueName: \"kubernetes.io/projected/760e8dbf-d827-42ef-969c-1c7409f7ac20-kube-api-access-ft5l8\") pod \"neutron-db-create-b544m\" (UID: \"760e8dbf-d827-42ef-969c-1c7409f7ac20\") " pod="openstack/neutron-db-create-b544m" Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.759852 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft5l8\" (UniqueName: \"kubernetes.io/projected/760e8dbf-d827-42ef-969c-1c7409f7ac20-kube-api-access-ft5l8\") pod \"neutron-db-create-b544m\" (UID: \"760e8dbf-d827-42ef-969c-1c7409f7ac20\") " pod="openstack/neutron-db-create-b544m" Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.760337 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmx6h\" (UniqueName: \"kubernetes.io/projected/c837cab9-43a5-4b84-a0bd-d915bca31600-kube-api-access-gmx6h\") pod \"neutron-170f-account-create-update-8bt4l\" (UID: \"c837cab9-43a5-4b84-a0bd-d915bca31600\") " pod="openstack/neutron-170f-account-create-update-8bt4l" Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.760376 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/760e8dbf-d827-42ef-969c-1c7409f7ac20-operator-scripts\") pod \"neutron-db-create-b544m\" (UID: \"760e8dbf-d827-42ef-969c-1c7409f7ac20\") " pod="openstack/neutron-db-create-b544m" Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.760421 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c837cab9-43a5-4b84-a0bd-d915bca31600-operator-scripts\") pod \"neutron-170f-account-create-update-8bt4l\" (UID: \"c837cab9-43a5-4b84-a0bd-d915bca31600\") " pod="openstack/neutron-170f-account-create-update-8bt4l" Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.761309 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/c837cab9-43a5-4b84-a0bd-d915bca31600-operator-scripts\") pod \"neutron-170f-account-create-update-8bt4l\" (UID: \"c837cab9-43a5-4b84-a0bd-d915bca31600\") " pod="openstack/neutron-170f-account-create-update-8bt4l" Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.762008 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/760e8dbf-d827-42ef-969c-1c7409f7ac20-operator-scripts\") pod \"neutron-db-create-b544m\" (UID: \"760e8dbf-d827-42ef-969c-1c7409f7ac20\") " pod="openstack/neutron-db-create-b544m" Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.799672 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft5l8\" (UniqueName: \"kubernetes.io/projected/760e8dbf-d827-42ef-969c-1c7409f7ac20-kube-api-access-ft5l8\") pod \"neutron-db-create-b544m\" (UID: \"760e8dbf-d827-42ef-969c-1c7409f7ac20\") " pod="openstack/neutron-db-create-b544m" Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.807668 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmx6h\" (UniqueName: \"kubernetes.io/projected/c837cab9-43a5-4b84-a0bd-d915bca31600-kube-api-access-gmx6h\") pod \"neutron-170f-account-create-update-8bt4l\" (UID: \"c837cab9-43a5-4b84-a0bd-d915bca31600\") " pod="openstack/neutron-170f-account-create-update-8bt4l" Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.948623 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-b544m" Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.998093 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-170f-account-create-update-8bt4l" Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.097899 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-44pdb"] Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.155814 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.157389 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-r9r4z"] Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.181080 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a5aa-account-create-update-j2nc8"] Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.197866 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c7b7-account-create-update-dcz9r"] Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.287105 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ktp2w"] Jan 21 11:18:57 crc kubenswrapper[4881]: W0121 11:18:57.318382 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc7e598c_b449_4e8c_9214_44e27cb45e53.slice/crio-7f0bea9e9dc943e576802d8c9a13363afa658fe4236f457e4490a5dbcd4320bd WatchSource:0}: Error finding container 7f0bea9e9dc943e576802d8c9a13363afa658fe4236f457e4490a5dbcd4320bd: Status 404 returned error can't find the container with id 7f0bea9e9dc943e576802d8c9a13363afa658fe4236f457e4490a5dbcd4320bd Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.346568 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-t4mx7"] Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 
11:18:57.347543 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-82x9l"] Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.734551 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3649-account-create-update-pqj5m"] Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.829193 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-44pdb" event={"ID":"34efcb76-01fb-490b-88c0-a4ee1363a01e","Type":"ContainerStarted","Data":"6dc4d522c502820b83234d2fee061b7bda412d486d52242e7e816991b3acbb57"} Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.837703 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-82x9l" event={"ID":"b4b2b4e9-304c-47ae-939a-9d938d012b90","Type":"ContainerStarted","Data":"2a3219b4170b52910ee3ec4f3e718c26c9394c8de6c94a328647a77455eecee7"} Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.847280 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-r9r4z" event={"ID":"c8cfe009-eba2-4713-b50f-cc334b4ca691","Type":"ContainerStarted","Data":"f5cc4525f4f901e33752ba6e7b8772cae9da70d02d9ba133272b4a6ad13119ce"} Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.849236 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-t4mx7" event={"ID":"bc7e598c-b449-4e8c-9214-44e27cb45e53","Type":"ContainerStarted","Data":"7f0bea9e9dc943e576802d8c9a13363afa658fe4236f457e4490a5dbcd4320bd"} Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.851309 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c7b7-account-create-update-dcz9r" event={"ID":"0145b8f9-5452-4f0e-819c-61fbb8badffb","Type":"ContainerStarted","Data":"bcc90bc5bb0ac66c01f3db31717a3508d38e85e90ddae059cb25369a981558ec"} Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.853906 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-170f-account-create-update-8bt4l"] Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.856528 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a5aa-account-create-update-j2nc8" event={"ID":"ec3ba10e-2cbd-4350-9014-27a92932849f","Type":"ContainerStarted","Data":"b23cd46acdcd43f425c2a5437146050ee4518de5ebe4b06308893c922580bb1d"} Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.859776 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ktp2w" event={"ID":"5d72ab14-b1c2-4382-847a-00eb254ac958","Type":"ContainerStarted","Data":"72a097cf59195f6eec304ff661d8ae56f590c3e6389aa564783cb080dd6a3c8c"} Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.913663 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-b544m"] Jan 21 11:18:57 crc kubenswrapper[4881]: W0121 11:18:57.990966 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc837cab9_43a5_4b84_a0bd_d915bca31600.slice/crio-7d95b7d1b61a8e9a37711f63ef2a8a7295172bf5dcd8dec5e260dde19f296088 WatchSource:0}: Error finding container 7d95b7d1b61a8e9a37711f63ef2a8a7295172bf5dcd8dec5e260dde19f296088: Status 404 returned error can't find the container with id 7d95b7d1b61a8e9a37711f63ef2a8a7295172bf5dcd8dec5e260dde19f296088 Jan 21 11:18:58 crc kubenswrapper[4881]: W0121 11:18:58.024312 4881 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod760e8dbf_d827_42ef_969c_1c7409f7ac20.slice/crio-c42dc028c1bafc0d8598c90d3604d93606d97482b49abd8a9779624f869edd2d WatchSource:0}: Error finding container c42dc028c1bafc0d8598c90d3604d93606d97482b49abd8a9779624f869edd2d: Status 404 returned error can't find the container with id c42dc028c1bafc0d8598c90d3604d93606d97482b49abd8a9779624f869edd2d Jan 21 11:18:59 crc kubenswrapper[4881]: I0121 11:18:59.116577 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-170f-account-create-update-8bt4l" event={"ID":"c837cab9-43a5-4b84-a0bd-d915bca31600","Type":"ContainerStarted","Data":"7d95b7d1b61a8e9a37711f63ef2a8a7295172bf5dcd8dec5e260dde19f296088"} Jan 21 11:18:59 crc kubenswrapper[4881]: I0121 11:18:59.125183 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3649-account-create-update-pqj5m" event={"ID":"6f6f337c-95ec-448f-ab58-e7e7fe7abfd4","Type":"ContainerStarted","Data":"0d6a8467ce12e79fc1ea582199a39bbf54288de22059797959d06afa76924361"} Jan 21 11:18:59 crc kubenswrapper[4881]: I0121 11:18:59.128397 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-b544m" event={"ID":"760e8dbf-d827-42ef-969c-1c7409f7ac20","Type":"ContainerStarted","Data":"c42dc028c1bafc0d8598c90d3604d93606d97482b49abd8a9779624f869edd2d"} Jan 21 11:19:00 crc kubenswrapper[4881]: I0121 11:19:00.158839 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ktp2w" event={"ID":"5d72ab14-b1c2-4382-847a-00eb254ac958","Type":"ContainerStarted","Data":"9183c1ea9a3472251b9a9872ac196a0371d8a3a960cf0876e3244bf2dc5fc313"} Jan 21 11:19:00 crc kubenswrapper[4881]: I0121 11:19:00.164489 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-82x9l" event={"ID":"b4b2b4e9-304c-47ae-939a-9d938d012b90","Type":"ContainerStarted","Data":"842c407700548966028d06c2f685224af9199aeb260a3fcbe49b13c5d2308449"} Jan 21 11:19:00 crc kubenswrapper[4881]: I0121 11:19:00.167586 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-r9r4z" event={"ID":"c8cfe009-eba2-4713-b50f-cc334b4ca691","Type":"ContainerStarted","Data":"8fede96a0f0891ea2a0beeea55c81b92d1d136a372295efbbbb9fb60c32a400b"} Jan 21 11:19:00 crc kubenswrapper[4881]: I0121 11:19:00.169847 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-170f-account-create-update-8bt4l" event={"ID":"c837cab9-43a5-4b84-a0bd-d915bca31600","Type":"ContainerStarted","Data":"475d11a1d0ffe3143569c01c096587097abd1f5b648c8d0d1064b5b35157b3c4"} Jan 21 11:19:00 crc kubenswrapper[4881]: I0121 11:19:00.173157 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3649-account-create-update-pqj5m" event={"ID":"6f6f337c-95ec-448f-ab58-e7e7fe7abfd4","Type":"ContainerStarted","Data":"23d18cc60c7d47249b61d06b5e22cae5297e1e798a824f42c26b13569f6185c2"} Jan 21 11:19:00 crc kubenswrapper[4881]: I0121 11:19:00.174884 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-b544m" event={"ID":"760e8dbf-d827-42ef-969c-1c7409f7ac20","Type":"ContainerStarted","Data":"4830c420695532fe361ac3eb65ee53d659da36dd7a4d7c07a18532e51115b820"} Jan 21 11:19:00 crc kubenswrapper[4881]: I0121 11:19:00.177551 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c7b7-account-create-update-dcz9r" 
event={"ID":"0145b8f9-5452-4f0e-819c-61fbb8badffb","Type":"ContainerStarted","Data":"19837216e672b1d70dcee3db6a9cc2dfe6a6a6ac2f0ef6c6a1c9729e5d023d0f"} Jan 21 11:19:00 crc kubenswrapper[4881]: I0121 11:19:00.184993 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a5aa-account-create-update-j2nc8" event={"ID":"ec3ba10e-2cbd-4350-9014-27a92932849f","Type":"ContainerStarted","Data":"68b28d1f90d946399d23686118aca2c39b038f12760a90f94c3980be0fdb6b45"} Jan 21 11:19:00 crc kubenswrapper[4881]: I0121 11:19:00.192656 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5ae3126-d6d3-4268-8e35-e216eabcc6f4","Type":"ContainerStarted","Data":"ef9d78c9c5e22c01f5e8274cad9637d465377b5339dc20fcbf444a1190841bcb"} Jan 21 11:19:00 crc kubenswrapper[4881]: I0121 11:19:00.201548 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-c7b7-account-create-update-dcz9r" podStartSLOduration=7.201514433 podStartE2EDuration="7.201514433s" podCreationTimestamp="2026-01-21 11:18:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:19:00.197344909 +0000 UTC m=+1327.457301378" watchObservedRunningTime="2026-01-21 11:19:00.201514433 +0000 UTC m=+1327.461470912" Jan 21 11:19:02 crc kubenswrapper[4881]: I0121 11:19:02.437616 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-3649-account-create-update-pqj5m" podStartSLOduration=7.437586694 podStartE2EDuration="7.437586694s" podCreationTimestamp="2026-01-21 11:18:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:19:02.425687926 +0000 UTC m=+1329.685644405" watchObservedRunningTime="2026-01-21 11:19:02.437586694 +0000 UTC m=+1329.697543163" Jan 21 11:19:02 crc kubenswrapper[4881]: I0121 11:19:02.454325 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-a5aa-account-create-update-j2nc8" podStartSLOduration=9.454291902 podStartE2EDuration="9.454291902s" podCreationTimestamp="2026-01-21 11:18:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:19:02.449973234 +0000 UTC m=+1329.709929703" watchObservedRunningTime="2026-01-21 11:19:02.454291902 +0000 UTC m=+1329.714248371" Jan 21 11:19:02 crc kubenswrapper[4881]: I0121 11:19:02.489232 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-ktp2w" podStartSLOduration=9.489199275 podStartE2EDuration="9.489199275s" podCreationTimestamp="2026-01-21 11:18:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:19:02.48099973 +0000 UTC m=+1329.740956209" watchObservedRunningTime="2026-01-21 11:19:02.489199275 +0000 UTC m=+1329.749155744" Jan 21 11:19:02 crc kubenswrapper[4881]: I0121 11:19:02.507363 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-b544m" podStartSLOduration=6.507328899 podStartE2EDuration="6.507328899s" podCreationTimestamp="2026-01-21 11:18:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:19:02.503012891 
+0000 UTC m=+1329.762969380" watchObservedRunningTime="2026-01-21 11:19:02.507328899 +0000 UTC m=+1329.767285368" Jan 21 11:19:02 crc kubenswrapper[4881]: I0121 11:19:02.527104 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-170f-account-create-update-8bt4l" podStartSLOduration=6.527067822 podStartE2EDuration="6.527067822s" podCreationTimestamp="2026-01-21 11:18:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:19:02.525875932 +0000 UTC m=+1329.785832401" watchObservedRunningTime="2026-01-21 11:19:02.527067822 +0000 UTC m=+1329.787024291" Jan 21 11:19:02 crc kubenswrapper[4881]: I0121 11:19:02.559924 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-82x9l" podStartSLOduration=7.559901394 podStartE2EDuration="7.559901394s" podCreationTimestamp="2026-01-21 11:18:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:19:02.548562771 +0000 UTC m=+1329.808519230" watchObservedRunningTime="2026-01-21 11:19:02.559901394 +0000 UTC m=+1329.819857863" Jan 21 11:19:02 crc kubenswrapper[4881]: I0121 11:19:02.569509 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-r9r4z" podStartSLOduration=9.569490553 podStartE2EDuration="9.569490553s" podCreationTimestamp="2026-01-21 11:18:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:19:02.567515494 +0000 UTC m=+1329.827471963" watchObservedRunningTime="2026-01-21 11:19:02.569490553 +0000 UTC m=+1329.829447032" Jan 21 11:19:03 crc kubenswrapper[4881]: I0121 11:19:03.401773 4881 generic.go:334] "Generic (PLEG): container finished" podID="c837cab9-43a5-4b84-a0bd-d915bca31600" containerID="475d11a1d0ffe3143569c01c096587097abd1f5b648c8d0d1064b5b35157b3c4" exitCode=0 Jan 21 11:19:03 crc kubenswrapper[4881]: I0121 11:19:03.401866 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-170f-account-create-update-8bt4l" event={"ID":"c837cab9-43a5-4b84-a0bd-d915bca31600","Type":"ContainerDied","Data":"475d11a1d0ffe3143569c01c096587097abd1f5b648c8d0d1064b5b35157b3c4"} Jan 21 11:19:03 crc kubenswrapper[4881]: I0121 11:19:03.406551 4881 generic.go:334] "Generic (PLEG): container finished" podID="5d72ab14-b1c2-4382-847a-00eb254ac958" containerID="9183c1ea9a3472251b9a9872ac196a0371d8a3a960cf0876e3244bf2dc5fc313" exitCode=0 Jan 21 11:19:03 crc kubenswrapper[4881]: I0121 11:19:03.406595 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ktp2w" event={"ID":"5d72ab14-b1c2-4382-847a-00eb254ac958","Type":"ContainerDied","Data":"9183c1ea9a3472251b9a9872ac196a0371d8a3a960cf0876e3244bf2dc5fc313"} Jan 21 11:19:03 crc kubenswrapper[4881]: I0121 11:19:03.409565 4881 generic.go:334] "Generic (PLEG): container finished" podID="760e8dbf-d827-42ef-969c-1c7409f7ac20" containerID="4830c420695532fe361ac3eb65ee53d659da36dd7a4d7c07a18532e51115b820" exitCode=0 Jan 21 11:19:03 crc kubenswrapper[4881]: I0121 11:19:03.409601 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-b544m" event={"ID":"760e8dbf-d827-42ef-969c-1c7409f7ac20","Type":"ContainerDied","Data":"4830c420695532fe361ac3eb65ee53d659da36dd7a4d7c07a18532e51115b820"} Jan 21 11:19:03 crc 
kubenswrapper[4881]: I0121 11:19:03.426296 4881 generic.go:334] "Generic (PLEG): container finished" podID="b4b2b4e9-304c-47ae-939a-9d938d012b90" containerID="842c407700548966028d06c2f685224af9199aeb260a3fcbe49b13c5d2308449" exitCode=0 Jan 21 11:19:03 crc kubenswrapper[4881]: I0121 11:19:03.426425 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-82x9l" event={"ID":"b4b2b4e9-304c-47ae-939a-9d938d012b90","Type":"ContainerDied","Data":"842c407700548966028d06c2f685224af9199aeb260a3fcbe49b13c5d2308449"} Jan 21 11:19:03 crc kubenswrapper[4881]: I0121 11:19:03.435468 4881 generic.go:334] "Generic (PLEG): container finished" podID="c8cfe009-eba2-4713-b50f-cc334b4ca691" containerID="8fede96a0f0891ea2a0beeea55c81b92d1d136a372295efbbbb9fb60c32a400b" exitCode=0 Jan 21 11:19:03 crc kubenswrapper[4881]: I0121 11:19:03.435645 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-r9r4z" event={"ID":"c8cfe009-eba2-4713-b50f-cc334b4ca691","Type":"ContainerDied","Data":"8fede96a0f0891ea2a0beeea55c81b92d1d136a372295efbbbb9fb60c32a400b"} Jan 21 11:19:03 crc kubenswrapper[4881]: I0121 11:19:03.451150 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5ae3126-d6d3-4268-8e35-e216eabcc6f4","Type":"ContainerStarted","Data":"c140acf6f14058c82c2022005acd28d679f35f983dc5582ed33c0dd219896e01"} Jan 21 11:19:03 crc kubenswrapper[4881]: I0121 11:19:03.511333 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 21 11:19:03 crc kubenswrapper[4881]: I0121 11:19:03.535093 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=26.53506426 podStartE2EDuration="26.53506426s" podCreationTimestamp="2026-01-21 11:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:19:03.534003293 +0000 UTC m=+1330.793959772" watchObservedRunningTime="2026-01-21 11:19:03.53506426 +0000 UTC m=+1330.795020729" Jan 21 11:19:05 crc kubenswrapper[4881]: I0121 11:19:05.966470 4881 generic.go:334] "Generic (PLEG): container finished" podID="6f6f337c-95ec-448f-ab58-e7e7fe7abfd4" containerID="23d18cc60c7d47249b61d06b5e22cae5297e1e798a824f42c26b13569f6185c2" exitCode=0 Jan 21 11:19:05 crc kubenswrapper[4881]: I0121 11:19:05.966689 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3649-account-create-update-pqj5m" event={"ID":"6f6f337c-95ec-448f-ab58-e7e7fe7abfd4","Type":"ContainerDied","Data":"23d18cc60c7d47249b61d06b5e22cae5297e1e798a824f42c26b13569f6185c2"} Jan 21 11:19:05 crc kubenswrapper[4881]: I0121 11:19:05.972982 4881 generic.go:334] "Generic (PLEG): container finished" podID="0145b8f9-5452-4f0e-819c-61fbb8badffb" containerID="19837216e672b1d70dcee3db6a9cc2dfe6a6a6ac2f0ef6c6a1c9729e5d023d0f" exitCode=0 Jan 21 11:19:05 crc kubenswrapper[4881]: I0121 11:19:05.973069 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c7b7-account-create-update-dcz9r" event={"ID":"0145b8f9-5452-4f0e-819c-61fbb8badffb","Type":"ContainerDied","Data":"19837216e672b1d70dcee3db6a9cc2dfe6a6a6ac2f0ef6c6a1c9729e5d023d0f"} Jan 21 11:19:05 crc kubenswrapper[4881]: I0121 11:19:05.976842 4881 generic.go:334] "Generic (PLEG): container finished" podID="ec3ba10e-2cbd-4350-9014-27a92932849f" 
containerID="68b28d1f90d946399d23686118aca2c39b038f12760a90f94c3980be0fdb6b45" exitCode=0 Jan 21 11:19:05 crc kubenswrapper[4881]: I0121 11:19:05.977035 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a5aa-account-create-update-j2nc8" event={"ID":"ec3ba10e-2cbd-4350-9014-27a92932849f","Type":"ContainerDied","Data":"68b28d1f90d946399d23686118aca2c39b038f12760a90f94c3980be0fdb6b45"} Jan 21 11:19:08 crc kubenswrapper[4881]: I0121 11:19:08.511986 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 21 11:19:08 crc kubenswrapper[4881]: I0121 11:19:08.523440 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.021300 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.224840 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-b544m" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.231857 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-170f-account-create-update-8bt4l" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.238217 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-r9r4z" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.319595 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c837cab9-43a5-4b84-a0bd-d915bca31600-operator-scripts\") pod \"c837cab9-43a5-4b84-a0bd-d915bca31600\" (UID: \"c837cab9-43a5-4b84-a0bd-d915bca31600\") " Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.319685 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8cfe009-eba2-4713-b50f-cc334b4ca691-operator-scripts\") pod \"c8cfe009-eba2-4713-b50f-cc334b4ca691\" (UID: \"c8cfe009-eba2-4713-b50f-cc334b4ca691\") " Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.319952 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/760e8dbf-d827-42ef-969c-1c7409f7ac20-operator-scripts\") pod \"760e8dbf-d827-42ef-969c-1c7409f7ac20\" (UID: \"760e8dbf-d827-42ef-969c-1c7409f7ac20\") " Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.320022 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmx6h\" (UniqueName: \"kubernetes.io/projected/c837cab9-43a5-4b84-a0bd-d915bca31600-kube-api-access-gmx6h\") pod \"c837cab9-43a5-4b84-a0bd-d915bca31600\" (UID: \"c837cab9-43a5-4b84-a0bd-d915bca31600\") " Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.320072 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft5l8\" (UniqueName: \"kubernetes.io/projected/760e8dbf-d827-42ef-969c-1c7409f7ac20-kube-api-access-ft5l8\") pod \"760e8dbf-d827-42ef-969c-1c7409f7ac20\" (UID: \"760e8dbf-d827-42ef-969c-1c7409f7ac20\") " Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.320103 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfm9d\" (UniqueName: 
\"kubernetes.io/projected/c8cfe009-eba2-4713-b50f-cc334b4ca691-kube-api-access-qfm9d\") pod \"c8cfe009-eba2-4713-b50f-cc334b4ca691\" (UID: \"c8cfe009-eba2-4713-b50f-cc334b4ca691\") " Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.320867 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/760e8dbf-d827-42ef-969c-1c7409f7ac20-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "760e8dbf-d827-42ef-969c-1c7409f7ac20" (UID: "760e8dbf-d827-42ef-969c-1c7409f7ac20"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.321321 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c837cab9-43a5-4b84-a0bd-d915bca31600-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c837cab9-43a5-4b84-a0bd-d915bca31600" (UID: "c837cab9-43a5-4b84-a0bd-d915bca31600"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.321670 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8cfe009-eba2-4713-b50f-cc334b4ca691-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c8cfe009-eba2-4713-b50f-cc334b4ca691" (UID: "c8cfe009-eba2-4713-b50f-cc334b4ca691"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.336175 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/760e8dbf-d827-42ef-969c-1c7409f7ac20-kube-api-access-ft5l8" (OuterVolumeSpecName: "kube-api-access-ft5l8") pod "760e8dbf-d827-42ef-969c-1c7409f7ac20" (UID: "760e8dbf-d827-42ef-969c-1c7409f7ac20"). InnerVolumeSpecName "kube-api-access-ft5l8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.346429 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c837cab9-43a5-4b84-a0bd-d915bca31600-kube-api-access-gmx6h" (OuterVolumeSpecName: "kube-api-access-gmx6h") pod "c837cab9-43a5-4b84-a0bd-d915bca31600" (UID: "c837cab9-43a5-4b84-a0bd-d915bca31600"). InnerVolumeSpecName "kube-api-access-gmx6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.347107 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8cfe009-eba2-4713-b50f-cc334b4ca691-kube-api-access-qfm9d" (OuterVolumeSpecName: "kube-api-access-qfm9d") pod "c8cfe009-eba2-4713-b50f-cc334b4ca691" (UID: "c8cfe009-eba2-4713-b50f-cc334b4ca691"). InnerVolumeSpecName "kube-api-access-qfm9d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.423560 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8cfe009-eba2-4713-b50f-cc334b4ca691-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.423645 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/760e8dbf-d827-42ef-969c-1c7409f7ac20-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.423659 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmx6h\" (UniqueName: \"kubernetes.io/projected/c837cab9-43a5-4b84-a0bd-d915bca31600-kube-api-access-gmx6h\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.423672 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft5l8\" (UniqueName: \"kubernetes.io/projected/760e8dbf-d827-42ef-969c-1c7409f7ac20-kube-api-access-ft5l8\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.423686 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfm9d\" (UniqueName: \"kubernetes.io/projected/c8cfe009-eba2-4713-b50f-cc334b4ca691-kube-api-access-qfm9d\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.423700 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c837cab9-43a5-4b84-a0bd-d915bca31600-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:10 crc kubenswrapper[4881]: I0121 11:19:10.029699 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-170f-account-create-update-8bt4l" event={"ID":"c837cab9-43a5-4b84-a0bd-d915bca31600","Type":"ContainerDied","Data":"7d95b7d1b61a8e9a37711f63ef2a8a7295172bf5dcd8dec5e260dde19f296088"} Jan 21 11:19:10 crc kubenswrapper[4881]: I0121 11:19:10.029812 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d95b7d1b61a8e9a37711f63ef2a8a7295172bf5dcd8dec5e260dde19f296088" Jan 21 11:19:10 crc kubenswrapper[4881]: I0121 11:19:10.029911 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-170f-account-create-update-8bt4l" Jan 21 11:19:10 crc kubenswrapper[4881]: I0121 11:19:10.033167 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-b544m" event={"ID":"760e8dbf-d827-42ef-969c-1c7409f7ac20","Type":"ContainerDied","Data":"c42dc028c1bafc0d8598c90d3604d93606d97482b49abd8a9779624f869edd2d"} Jan 21 11:19:10 crc kubenswrapper[4881]: I0121 11:19:10.033219 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c42dc028c1bafc0d8598c90d3604d93606d97482b49abd8a9779624f869edd2d" Jan 21 11:19:10 crc kubenswrapper[4881]: I0121 11:19:10.033312 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-b544m" Jan 21 11:19:10 crc kubenswrapper[4881]: I0121 11:19:10.038843 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-r9r4z" Jan 21 11:19:10 crc kubenswrapper[4881]: I0121 11:19:10.039180 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-r9r4z" event={"ID":"c8cfe009-eba2-4713-b50f-cc334b4ca691","Type":"ContainerDied","Data":"f5cc4525f4f901e33752ba6e7b8772cae9da70d02d9ba133272b4a6ad13119ce"} Jan 21 11:19:10 crc kubenswrapper[4881]: I0121 11:19:10.039251 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5cc4525f4f901e33752ba6e7b8772cae9da70d02d9ba133272b4a6ad13119ce" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.073462 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-82x9l" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.081911 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c7b7-account-create-update-dcz9r" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.124206 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ktp2w" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.130772 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a5aa-account-create-update-j2nc8" event={"ID":"ec3ba10e-2cbd-4350-9014-27a92932849f","Type":"ContainerDied","Data":"b23cd46acdcd43f425c2a5437146050ee4518de5ebe4b06308893c922580bb1d"} Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.130844 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b23cd46acdcd43f425c2a5437146050ee4518de5ebe4b06308893c922580bb1d" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.140294 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3649-account-create-update-pqj5m" event={"ID":"6f6f337c-95ec-448f-ab58-e7e7fe7abfd4","Type":"ContainerDied","Data":"0d6a8467ce12e79fc1ea582199a39bbf54288de22059797959d06afa76924361"} Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.140340 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d6a8467ce12e79fc1ea582199a39bbf54288de22059797959d06afa76924361" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.144636 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ktp2w" event={"ID":"5d72ab14-b1c2-4382-847a-00eb254ac958","Type":"ContainerDied","Data":"72a097cf59195f6eec304ff661d8ae56f590c3e6389aa564783cb080dd6a3c8c"} Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.144674 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72a097cf59195f6eec304ff661d8ae56f590c3e6389aa564783cb080dd6a3c8c" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.144728 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ktp2w" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.145900 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3649-account-create-update-pqj5m" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.148697 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-a5aa-account-create-update-j2nc8" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.155184 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-82x9l" event={"ID":"b4b2b4e9-304c-47ae-939a-9d938d012b90","Type":"ContainerDied","Data":"2a3219b4170b52910ee3ec4f3e718c26c9394c8de6c94a328647a77455eecee7"} Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.155249 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a3219b4170b52910ee3ec4f3e718c26c9394c8de6c94a328647a77455eecee7" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.155372 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-82x9l" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.160857 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c7b7-account-create-update-dcz9r" event={"ID":"0145b8f9-5452-4f0e-819c-61fbb8badffb","Type":"ContainerDied","Data":"bcc90bc5bb0ac66c01f3db31717a3508d38e85e90ddae059cb25369a981558ec"} Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.160902 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcc90bc5bb0ac66c01f3db31717a3508d38e85e90ddae059cb25369a981558ec" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.161038 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c7b7-account-create-update-dcz9r" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.176145 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0145b8f9-5452-4f0e-819c-61fbb8badffb-operator-scripts\") pod \"0145b8f9-5452-4f0e-819c-61fbb8badffb\" (UID: \"0145b8f9-5452-4f0e-819c-61fbb8badffb\") " Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.176347 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j4f4\" (UniqueName: \"kubernetes.io/projected/b4b2b4e9-304c-47ae-939a-9d938d012b90-kube-api-access-7j4f4\") pod \"b4b2b4e9-304c-47ae-939a-9d938d012b90\" (UID: \"b4b2b4e9-304c-47ae-939a-9d938d012b90\") " Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.176487 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wd8bn\" (UniqueName: \"kubernetes.io/projected/0145b8f9-5452-4f0e-819c-61fbb8badffb-kube-api-access-wd8bn\") pod \"0145b8f9-5452-4f0e-819c-61fbb8badffb\" (UID: \"0145b8f9-5452-4f0e-819c-61fbb8badffb\") " Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.176540 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4b2b4e9-304c-47ae-939a-9d938d012b90-operator-scripts\") pod \"b4b2b4e9-304c-47ae-939a-9d938d012b90\" (UID: \"b4b2b4e9-304c-47ae-939a-9d938d012b90\") " Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.177959 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4b2b4e9-304c-47ae-939a-9d938d012b90-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b4b2b4e9-304c-47ae-939a-9d938d012b90" (UID: "b4b2b4e9-304c-47ae-939a-9d938d012b90"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.178429 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0145b8f9-5452-4f0e-819c-61fbb8badffb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0145b8f9-5452-4f0e-819c-61fbb8badffb" (UID: "0145b8f9-5452-4f0e-819c-61fbb8badffb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.190068 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4b2b4e9-304c-47ae-939a-9d938d012b90-kube-api-access-7j4f4" (OuterVolumeSpecName: "kube-api-access-7j4f4") pod "b4b2b4e9-304c-47ae-939a-9d938d012b90" (UID: "b4b2b4e9-304c-47ae-939a-9d938d012b90"). InnerVolumeSpecName "kube-api-access-7j4f4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.190229 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0145b8f9-5452-4f0e-819c-61fbb8badffb-kube-api-access-wd8bn" (OuterVolumeSpecName: "kube-api-access-wd8bn") pod "0145b8f9-5452-4f0e-819c-61fbb8badffb" (UID: "0145b8f9-5452-4f0e-819c-61fbb8badffb"). InnerVolumeSpecName "kube-api-access-wd8bn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.278944 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d72ab14-b1c2-4382-847a-00eb254ac958-operator-scripts\") pod \"5d72ab14-b1c2-4382-847a-00eb254ac958\" (UID: \"5d72ab14-b1c2-4382-847a-00eb254ac958\") " Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.279010 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm6lx\" (UniqueName: \"kubernetes.io/projected/ec3ba10e-2cbd-4350-9014-27a92932849f-kube-api-access-nm6lx\") pod \"ec3ba10e-2cbd-4350-9014-27a92932849f\" (UID: \"ec3ba10e-2cbd-4350-9014-27a92932849f\") " Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.279194 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec3ba10e-2cbd-4350-9014-27a92932849f-operator-scripts\") pod \"ec3ba10e-2cbd-4350-9014-27a92932849f\" (UID: \"ec3ba10e-2cbd-4350-9014-27a92932849f\") " Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.279251 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5g6p\" (UniqueName: \"kubernetes.io/projected/5d72ab14-b1c2-4382-847a-00eb254ac958-kube-api-access-z5g6p\") pod \"5d72ab14-b1c2-4382-847a-00eb254ac958\" (UID: \"5d72ab14-b1c2-4382-847a-00eb254ac958\") " Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.279339 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxqzm\" (UniqueName: \"kubernetes.io/projected/6f6f337c-95ec-448f-ab58-e7e7fe7abfd4-kube-api-access-fxqzm\") pod \"6f6f337c-95ec-448f-ab58-e7e7fe7abfd4\" (UID: \"6f6f337c-95ec-448f-ab58-e7e7fe7abfd4\") " Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.279374 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6f337c-95ec-448f-ab58-e7e7fe7abfd4-operator-scripts\") pod 
\"6f6f337c-95ec-448f-ab58-e7e7fe7abfd4\" (UID: \"6f6f337c-95ec-448f-ab58-e7e7fe7abfd4\") " Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.279663 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d72ab14-b1c2-4382-847a-00eb254ac958-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d72ab14-b1c2-4382-847a-00eb254ac958" (UID: "5d72ab14-b1c2-4382-847a-00eb254ac958"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.280055 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0145b8f9-5452-4f0e-819c-61fbb8badffb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.280085 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j4f4\" (UniqueName: \"kubernetes.io/projected/b4b2b4e9-304c-47ae-939a-9d938d012b90-kube-api-access-7j4f4\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.280097 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d72ab14-b1c2-4382-847a-00eb254ac958-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.280107 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wd8bn\" (UniqueName: \"kubernetes.io/projected/0145b8f9-5452-4f0e-819c-61fbb8badffb-kube-api-access-wd8bn\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.280117 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4b2b4e9-304c-47ae-939a-9d938d012b90-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.280244 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f6f337c-95ec-448f-ab58-e7e7fe7abfd4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6f6f337c-95ec-448f-ab58-e7e7fe7abfd4" (UID: "6f6f337c-95ec-448f-ab58-e7e7fe7abfd4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.280266 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec3ba10e-2cbd-4350-9014-27a92932849f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec3ba10e-2cbd-4350-9014-27a92932849f" (UID: "ec3ba10e-2cbd-4350-9014-27a92932849f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.282702 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec3ba10e-2cbd-4350-9014-27a92932849f-kube-api-access-nm6lx" (OuterVolumeSpecName: "kube-api-access-nm6lx") pod "ec3ba10e-2cbd-4350-9014-27a92932849f" (UID: "ec3ba10e-2cbd-4350-9014-27a92932849f"). InnerVolumeSpecName "kube-api-access-nm6lx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.282745 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d72ab14-b1c2-4382-847a-00eb254ac958-kube-api-access-z5g6p" (OuterVolumeSpecName: "kube-api-access-z5g6p") pod "5d72ab14-b1c2-4382-847a-00eb254ac958" (UID: "5d72ab14-b1c2-4382-847a-00eb254ac958"). InnerVolumeSpecName "kube-api-access-z5g6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.284406 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f6f337c-95ec-448f-ab58-e7e7fe7abfd4-kube-api-access-fxqzm" (OuterVolumeSpecName: "kube-api-access-fxqzm") pod "6f6f337c-95ec-448f-ab58-e7e7fe7abfd4" (UID: "6f6f337c-95ec-448f-ab58-e7e7fe7abfd4"). InnerVolumeSpecName "kube-api-access-fxqzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.382002 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxqzm\" (UniqueName: \"kubernetes.io/projected/6f6f337c-95ec-448f-ab58-e7e7fe7abfd4-kube-api-access-fxqzm\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.382045 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6f337c-95ec-448f-ab58-e7e7fe7abfd4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.382057 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm6lx\" (UniqueName: \"kubernetes.io/projected/ec3ba10e-2cbd-4350-9014-27a92932849f-kube-api-access-nm6lx\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.382069 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec3ba10e-2cbd-4350-9014-27a92932849f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.382078 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5g6p\" (UniqueName: \"kubernetes.io/projected/5d72ab14-b1c2-4382-847a-00eb254ac958-kube-api-access-z5g6p\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:15 crc kubenswrapper[4881]: E0121 11:19:15.750837 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Jan 21 11:19:15 crc kubenswrapper[4881]: E0121 11:19:15.750968 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Jan 21 11:19:15 crc kubenswrapper[4881]: E0121 11:19:15.751212 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-db-sync,Image:38.102.83.182:5001/podified-master-centos10/openstack-watcher-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gd8cs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-db-sync-t4mx7_openstack(bc7e598c-b449-4e8c-9214-44e27cb45e53): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:19:15 crc kubenswrapper[4881]: E0121 11:19:15.752575 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/watcher-db-sync-t4mx7" podUID="bc7e598c-b449-4e8c-9214-44e27cb45e53" Jan 21 11:19:16 crc kubenswrapper[4881]: I0121 11:19:16.173834 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3649-account-create-update-pqj5m" Jan 21 11:19:16 crc kubenswrapper[4881]: I0121 11:19:16.174973 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-a5aa-account-create-update-j2nc8" Jan 21 11:19:16 crc kubenswrapper[4881]: I0121 11:19:16.173988 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-44pdb" event={"ID":"34efcb76-01fb-490b-88c0-a4ee1363a01e","Type":"ContainerStarted","Data":"498906e9fbb3b564603759f2238f54ad3d7c8a3ccff8535f1f6031fd2e192fd4"} Jan 21 11:19:16 crc kubenswrapper[4881]: E0121 11:19:16.176710 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-watcher-api:watcher_latest\\\"\"" pod="openstack/watcher-db-sync-t4mx7" podUID="bc7e598c-b449-4e8c-9214-44e27cb45e53" Jan 21 11:19:16 crc kubenswrapper[4881]: I0121 11:19:16.525615 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-44pdb" podStartSLOduration=4.929812152 podStartE2EDuration="23.525596156s" podCreationTimestamp="2026-01-21 11:18:53 +0000 UTC" firstStartedPulling="2026-01-21 11:18:57.148883475 +0000 UTC m=+1324.408839944" lastFinishedPulling="2026-01-21 11:19:15.744667479 +0000 UTC m=+1343.004623948" observedRunningTime="2026-01-21 11:19:16.494647612 +0000 UTC m=+1343.754604101" watchObservedRunningTime="2026-01-21 11:19:16.525596156 +0000 UTC m=+1343.785552625" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.510137 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-mxb97"] Jan 21 11:19:21 crc kubenswrapper[4881]: E0121 11:19:21.512670 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f6f337c-95ec-448f-ab58-e7e7fe7abfd4" containerName="mariadb-account-create-update" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.512696 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f6f337c-95ec-448f-ab58-e7e7fe7abfd4" containerName="mariadb-account-create-update" Jan 21 11:19:21 crc kubenswrapper[4881]: E0121 11:19:21.512710 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d72ab14-b1c2-4382-847a-00eb254ac958" containerName="mariadb-database-create" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.512717 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d72ab14-b1c2-4382-847a-00eb254ac958" containerName="mariadb-database-create" Jan 21 11:19:21 crc kubenswrapper[4881]: E0121 11:19:21.512727 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8cfe009-eba2-4713-b50f-cc334b4ca691" containerName="mariadb-database-create" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.512736 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8cfe009-eba2-4713-b50f-cc334b4ca691" containerName="mariadb-database-create" Jan 21 11:19:21 crc kubenswrapper[4881]: E0121 11:19:21.512749 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec3ba10e-2cbd-4350-9014-27a92932849f" containerName="mariadb-account-create-update" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.512766 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec3ba10e-2cbd-4350-9014-27a92932849f" containerName="mariadb-account-create-update" Jan 21 11:19:21 crc kubenswrapper[4881]: E0121 11:19:21.512834 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4b2b4e9-304c-47ae-939a-9d938d012b90" containerName="mariadb-database-create" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.512842 4881 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="b4b2b4e9-304c-47ae-939a-9d938d012b90" containerName="mariadb-database-create" Jan 21 11:19:21 crc kubenswrapper[4881]: E0121 11:19:21.512873 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0145b8f9-5452-4f0e-819c-61fbb8badffb" containerName="mariadb-account-create-update" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.512885 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0145b8f9-5452-4f0e-819c-61fbb8badffb" containerName="mariadb-account-create-update" Jan 21 11:19:21 crc kubenswrapper[4881]: E0121 11:19:21.512902 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c837cab9-43a5-4b84-a0bd-d915bca31600" containerName="mariadb-account-create-update" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.512911 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c837cab9-43a5-4b84-a0bd-d915bca31600" containerName="mariadb-account-create-update" Jan 21 11:19:21 crc kubenswrapper[4881]: E0121 11:19:21.512922 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="760e8dbf-d827-42ef-969c-1c7409f7ac20" containerName="mariadb-database-create" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.512928 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="760e8dbf-d827-42ef-969c-1c7409f7ac20" containerName="mariadb-database-create" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.513173 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="0145b8f9-5452-4f0e-819c-61fbb8badffb" containerName="mariadb-account-create-update" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.513187 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f6f337c-95ec-448f-ab58-e7e7fe7abfd4" containerName="mariadb-account-create-update" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.513209 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4b2b4e9-304c-47ae-939a-9d938d012b90" containerName="mariadb-database-create" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.513228 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d72ab14-b1c2-4382-847a-00eb254ac958" containerName="mariadb-database-create" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.513237 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="760e8dbf-d827-42ef-969c-1c7409f7ac20" containerName="mariadb-database-create" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.513253 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c837cab9-43a5-4b84-a0bd-d915bca31600" containerName="mariadb-account-create-update" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.513268 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec3ba10e-2cbd-4350-9014-27a92932849f" containerName="mariadb-account-create-update" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.513288 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8cfe009-eba2-4713-b50f-cc334b4ca691" containerName="mariadb-database-create" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.514005 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-mxb97" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.516404 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-f8snw" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.520139 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.539428 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mxb97"] Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.692881 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-config-data\") pod \"glance-db-sync-mxb97\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " pod="openstack/glance-db-sync-mxb97" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.693470 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-db-sync-config-data\") pod \"glance-db-sync-mxb97\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " pod="openstack/glance-db-sync-mxb97" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.693563 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvn9r\" (UniqueName: \"kubernetes.io/projected/349e8898-8b7c-414a-8357-d431c8b81bf4-kube-api-access-gvn9r\") pod \"glance-db-sync-mxb97\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " pod="openstack/glance-db-sync-mxb97" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.693658 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-combined-ca-bundle\") pod \"glance-db-sync-mxb97\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " pod="openstack/glance-db-sync-mxb97" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.795593 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvn9r\" (UniqueName: \"kubernetes.io/projected/349e8898-8b7c-414a-8357-d431c8b81bf4-kube-api-access-gvn9r\") pod \"glance-db-sync-mxb97\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " pod="openstack/glance-db-sync-mxb97" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.795726 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-combined-ca-bundle\") pod \"glance-db-sync-mxb97\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " pod="openstack/glance-db-sync-mxb97" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.795869 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-config-data\") pod \"glance-db-sync-mxb97\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " pod="openstack/glance-db-sync-mxb97" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.795913 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-db-sync-config-data\") pod 
\"glance-db-sync-mxb97\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " pod="openstack/glance-db-sync-mxb97" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.810645 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-config-data\") pod \"glance-db-sync-mxb97\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " pod="openstack/glance-db-sync-mxb97" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.810687 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-combined-ca-bundle\") pod \"glance-db-sync-mxb97\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " pod="openstack/glance-db-sync-mxb97" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.810773 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-db-sync-config-data\") pod \"glance-db-sync-mxb97\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " pod="openstack/glance-db-sync-mxb97" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.814248 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvn9r\" (UniqueName: \"kubernetes.io/projected/349e8898-8b7c-414a-8357-d431c8b81bf4-kube-api-access-gvn9r\") pod \"glance-db-sync-mxb97\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " pod="openstack/glance-db-sync-mxb97" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.872943 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mxb97" Jan 21 11:19:22 crc kubenswrapper[4881]: I0121 11:19:22.569845 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mxb97"] Jan 21 11:19:23 crc kubenswrapper[4881]: I0121 11:19:23.748026 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mxb97" event={"ID":"349e8898-8b7c-414a-8357-d431c8b81bf4","Type":"ContainerStarted","Data":"cd824796b06380fe0748d0a1334aa26a3fd0a19fab70225e560d35cfb754e2b4"} Jan 21 11:19:30 crc kubenswrapper[4881]: I0121 11:19:30.830775 4881 generic.go:334] "Generic (PLEG): container finished" podID="34efcb76-01fb-490b-88c0-a4ee1363a01e" containerID="498906e9fbb3b564603759f2238f54ad3d7c8a3ccff8535f1f6031fd2e192fd4" exitCode=0 Jan 21 11:19:30 crc kubenswrapper[4881]: I0121 11:19:30.830819 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-44pdb" event={"ID":"34efcb76-01fb-490b-88c0-a4ee1363a01e","Type":"ContainerDied","Data":"498906e9fbb3b564603759f2238f54ad3d7c8a3ccff8535f1f6031fd2e192fd4"} Jan 21 11:19:43 crc kubenswrapper[4881]: E0121 11:19:43.010387 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Jan 21 11:19:43 crc kubenswrapper[4881]: E0121 11:19:43.010769 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Jan 21 11:19:43 crc kubenswrapper[4881]: E0121 11:19:43.010921 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:glance-db-sync,Image:38.102.83.182:5001/podified-master-centos10/openstack-glance-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gvn9r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-mxb97_openstack(349e8898-8b7c-414a-8357-d431c8b81bf4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:19:43 crc kubenswrapper[4881]: E0121 11:19:43.012278 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-mxb97" podUID="349e8898-8b7c-414a-8357-d431c8b81bf4" Jan 21 11:19:43 crc kubenswrapper[4881]: I0121 11:19:43.174286 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-44pdb" event={"ID":"34efcb76-01fb-490b-88c0-a4ee1363a01e","Type":"ContainerDied","Data":"6dc4d522c502820b83234d2fee061b7bda412d486d52242e7e816991b3acbb57"} Jan 21 11:19:43 crc kubenswrapper[4881]: I0121 11:19:43.174606 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dc4d522c502820b83234d2fee061b7bda412d486d52242e7e816991b3acbb57" Jan 21 11:19:43 crc kubenswrapper[4881]: E0121 11:19:43.176414 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-glance-api:watcher_latest\\\"\"" pod="openstack/glance-db-sync-mxb97" podUID="349e8898-8b7c-414a-8357-d431c8b81bf4" Jan 21 11:19:43 crc kubenswrapper[4881]: I0121 11:19:43.207591 4881 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-44pdb" Jan 21 11:19:43 crc kubenswrapper[4881]: I0121 11:19:43.233272 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34efcb76-01fb-490b-88c0-a4ee1363a01e-combined-ca-bundle\") pod \"34efcb76-01fb-490b-88c0-a4ee1363a01e\" (UID: \"34efcb76-01fb-490b-88c0-a4ee1363a01e\") " Jan 21 11:19:43 crc kubenswrapper[4881]: I0121 11:19:43.233431 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34efcb76-01fb-490b-88c0-a4ee1363a01e-config-data\") pod \"34efcb76-01fb-490b-88c0-a4ee1363a01e\" (UID: \"34efcb76-01fb-490b-88c0-a4ee1363a01e\") " Jan 21 11:19:43 crc kubenswrapper[4881]: I0121 11:19:43.234138 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5fnp\" (UniqueName: \"kubernetes.io/projected/34efcb76-01fb-490b-88c0-a4ee1363a01e-kube-api-access-r5fnp\") pod \"34efcb76-01fb-490b-88c0-a4ee1363a01e\" (UID: \"34efcb76-01fb-490b-88c0-a4ee1363a01e\") " Jan 21 11:19:43 crc kubenswrapper[4881]: I0121 11:19:43.239512 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34efcb76-01fb-490b-88c0-a4ee1363a01e-kube-api-access-r5fnp" (OuterVolumeSpecName: "kube-api-access-r5fnp") pod "34efcb76-01fb-490b-88c0-a4ee1363a01e" (UID: "34efcb76-01fb-490b-88c0-a4ee1363a01e"). InnerVolumeSpecName "kube-api-access-r5fnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:19:43 crc kubenswrapper[4881]: I0121 11:19:43.273714 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34efcb76-01fb-490b-88c0-a4ee1363a01e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34efcb76-01fb-490b-88c0-a4ee1363a01e" (UID: "34efcb76-01fb-490b-88c0-a4ee1363a01e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:19:43 crc kubenswrapper[4881]: I0121 11:19:43.300577 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34efcb76-01fb-490b-88c0-a4ee1363a01e-config-data" (OuterVolumeSpecName: "config-data") pod "34efcb76-01fb-490b-88c0-a4ee1363a01e" (UID: "34efcb76-01fb-490b-88c0-a4ee1363a01e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:19:43 crc kubenswrapper[4881]: I0121 11:19:43.340222 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34efcb76-01fb-490b-88c0-a4ee1363a01e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:43 crc kubenswrapper[4881]: I0121 11:19:43.340448 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34efcb76-01fb-490b-88c0-a4ee1363a01e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:43 crc kubenswrapper[4881]: I0121 11:19:43.340527 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5fnp\" (UniqueName: \"kubernetes.io/projected/34efcb76-01fb-490b-88c0-a4ee1363a01e-kube-api-access-r5fnp\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.184970 4881 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.184970 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-44pdb" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.185146 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-t4mx7" event={"ID":"bc7e598c-b449-4e8c-9214-44e27cb45e53","Type":"ContainerStarted","Data":"b4ed75bebc3e4f7b35b331a2f216bede613a9086f548aa45e96cbef5724a690a"} Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.217421 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-t4mx7" podStartSLOduration=6.524329766 podStartE2EDuration="52.217391877s" podCreationTimestamp="2026-01-21 11:18:52 +0000 UTC" firstStartedPulling="2026-01-21 11:18:57.365940286 +0000 UTC m=+1324.625896755" lastFinishedPulling="2026-01-21 11:19:43.059002397 +0000 UTC m=+1370.318958866" observedRunningTime="2026-01-21 11:19:44.21514665 +0000 UTC m=+1371.475103139" watchObservedRunningTime="2026-01-21 11:19:44.217391877 +0000 UTC m=+1371.477348386" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.668761 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c487768dc-xjcjd"] Jan 21 11:19:44 crc kubenswrapper[4881]: E0121 11:19:44.669504 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34efcb76-01fb-490b-88c0-a4ee1363a01e" containerName="keystone-db-sync" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.669558 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="34efcb76-01fb-490b-88c0-a4ee1363a01e" containerName="keystone-db-sync" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.670031 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="34efcb76-01fb-490b-88c0-a4ee1363a01e" containerName="keystone-db-sync" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.671879 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.704284 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c487768dc-xjcjd"] Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.716664 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-ovsdbserver-sb\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.716730 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tks72\" (UniqueName: \"kubernetes.io/projected/386c2ea0-a9e4-490b-b83d-9106af06cd60-kube-api-access-tks72\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.716871 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-ovsdbserver-nb\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.716903 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-config\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.716938 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-dns-swift-storage-0\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.716990 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-dns-svc\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.729268 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-wg7xs"] Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.743288 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.751994 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.752375 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.752694 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.752911 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-j54nk" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.753079 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.774435 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wg7xs"] Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.825501 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-combined-ca-bundle\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.825567 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-config-data\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.825691 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-ovsdbserver-sb\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.825750 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tks72\" (UniqueName: \"kubernetes.io/projected/386c2ea0-a9e4-490b-b83d-9106af06cd60-kube-api-access-tks72\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.825852 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-fernet-keys\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.825939 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vbj2\" (UniqueName: \"kubernetes.io/projected/cc3f2556-7427-4715-a56d-bbd3d7f8422f-kube-api-access-6vbj2\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.826004 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-scripts\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.826102 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-ovsdbserver-nb\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.826139 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-config\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.826178 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-credential-keys\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.826217 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-dns-swift-storage-0\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.826432 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-dns-svc\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.827877 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-ovsdbserver-nb\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.832707 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-ovsdbserver-sb\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.836579 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-dns-svc\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.838183 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-dns-swift-storage-0\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.839041 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-config\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.915010 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tks72\" (UniqueName: \"kubernetes.io/projected/386c2ea0-a9e4-490b-b83d-9106af06cd60-kube-api-access-tks72\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.928980 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-combined-ca-bundle\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.929028 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-config-data\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.929114 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-fernet-keys\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.929163 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vbj2\" (UniqueName: \"kubernetes.io/projected/cc3f2556-7427-4715-a56d-bbd3d7f8422f-kube-api-access-6vbj2\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.929200 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-scripts\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.929259 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-credential-keys\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.936007 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-fernet-keys\") pod \"keystone-bootstrap-wg7xs\" (UID: 
\"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.939653 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-config-data\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.942942 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-credential-keys\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.943117 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-scripts\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.947116 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-combined-ca-bundle\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.974247 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vbj2\" (UniqueName: \"kubernetes.io/projected/cc3f2556-7427-4715-a56d-bbd3d7f8422f-kube-api-access-6vbj2\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.012152 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.058636 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-77fb486557-zjtxw"] Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.072490 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.099985 4881 util.go:30] "No sandbox for pod can be found. 
Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.099985 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.101620 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77fb486557-zjtxw"] Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.111767 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.112331 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.120967 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-2zrv4" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.122067 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.160003 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.163397 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.176885 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.177163 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.224815 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d96c79b7-58c4-4bcc-9e56-02f2a8860764-horizon-secret-key\") pod \"horizon-77fb486557-zjtxw\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.224926 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6lrj\" (UniqueName: \"kubernetes.io/projected/d96c79b7-58c4-4bcc-9e56-02f2a8860764-kube-api-access-t6lrj\") pod \"horizon-77fb486557-zjtxw\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.225011 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-config-data\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.225058 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d96c79b7-58c4-4bcc-9e56-02f2a8860764-logs\") pod \"horizon-77fb486557-zjtxw\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.225086 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.225108 4881 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcec3c24-87bd-4c22-a800-d3835455a38b-log-httpd\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.225162 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d96c79b7-58c4-4bcc-9e56-02f2a8860764-config-data\") pod \"horizon-77fb486557-zjtxw\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.225192 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d96c79b7-58c4-4bcc-9e56-02f2a8860764-scripts\") pod \"horizon-77fb486557-zjtxw\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.225230 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.225251 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-scripts\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.225278 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcec3c24-87bd-4c22-a800-d3835455a38b-run-httpd\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.225316 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj6cp\" (UniqueName: \"kubernetes.io/projected/bcec3c24-87bd-4c22-a800-d3835455a38b-kube-api-access-bj6cp\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.330841 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d96c79b7-58c4-4bcc-9e56-02f2a8860764-config-data\") pod \"horizon-77fb486557-zjtxw\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.337734 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d96c79b7-58c4-4bcc-9e56-02f2a8860764-scripts\") pod \"horizon-77fb486557-zjtxw\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.337902 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.337960 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-scripts\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.338006 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcec3c24-87bd-4c22-a800-d3835455a38b-run-httpd\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.338062 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj6cp\" (UniqueName: \"kubernetes.io/projected/bcec3c24-87bd-4c22-a800-d3835455a38b-kube-api-access-bj6cp\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.338217 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d96c79b7-58c4-4bcc-9e56-02f2a8860764-horizon-secret-key\") pod \"horizon-77fb486557-zjtxw\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.338245 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6lrj\" (UniqueName: \"kubernetes.io/projected/d96c79b7-58c4-4bcc-9e56-02f2a8860764-kube-api-access-t6lrj\") pod \"horizon-77fb486557-zjtxw\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.338357 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-config-data\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.338439 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d96c79b7-58c4-4bcc-9e56-02f2a8860764-logs\") pod \"horizon-77fb486557-zjtxw\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.338459 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.338485 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcec3c24-87bd-4c22-a800-d3835455a38b-log-httpd\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.344656 4881 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcec3c24-87bd-4c22-a800-d3835455a38b-log-httpd\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.348373 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d96c79b7-58c4-4bcc-9e56-02f2a8860764-logs\") pod \"horizon-77fb486557-zjtxw\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.348742 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcec3c24-87bd-4c22-a800-d3835455a38b-run-httpd\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.352618 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d96c79b7-58c4-4bcc-9e56-02f2a8860764-horizon-secret-key\") pod \"horizon-77fb486557-zjtxw\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.355005 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d96c79b7-58c4-4bcc-9e56-02f2a8860764-scripts\") pod \"horizon-77fb486557-zjtxw\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.356178 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d96c79b7-58c4-4bcc-9e56-02f2a8860764-config-data\") pod \"horizon-77fb486557-zjtxw\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.379996 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-scripts\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.390672 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.396713 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-config-data\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.398333 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.407197 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 
11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.434439 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj6cp\" (UniqueName: \"kubernetes.io/projected/bcec3c24-87bd-4c22-a800-d3835455a38b-kube-api-access-bj6cp\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.439461 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-slhtz"] Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.440991 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-slhtz" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.454081 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6lrj\" (UniqueName: \"kubernetes.io/projected/d96c79b7-58c4-4bcc-9e56-02f2a8860764-kube-api-access-t6lrj\") pod \"horizon-77fb486557-zjtxw\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.455281 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-cl6xz" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.455595 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.472578 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-slhtz"] Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.497843 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-t6mz2"]
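The reflector.go:368 lines here and above mark the kubelet's per-object reflectors completing their initial LIST for each Secret or ConfigMap a newly added pod references; the corresponding mounts proceed only once these caches are primed. The same list-watch machinery is available to any client through shared informers. A sketch that watches the identical resource type and namespace (the kubeconfig path and the 30-second resync period are arbitrary choices, not values from the log):

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Secrets in the openstack namespace: the same resource type the
	// kubelet reflectors above are priming.
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 30*time.Second, informers.WithNamespace("openstack"))
	inf := factory.Core().V1().Secrets().Informer()
	inf.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			s := obj.(*corev1.Secret)
			fmt.Println("cache populated for secret:", s.Namespace+"/"+s.Name)
		},
	})
	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	// Block until the initial LIST lands, the moment reflector.go:368 logs.
	cache.WaitForCacheSync(stop, inf.HasSynced)
	select {} // keep watching
}
```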
Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.499625 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-t6mz2" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.535053 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.535662 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.535950 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-kj7bj" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.549744 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/869a596b-159c-4185-a4ab-0e36c5d130fc-config\") pod \"neutron-db-sync-t6mz2\" (UID: \"869a596b-159c-4185-a4ab-0e36c5d130fc\") " pod="openstack/neutron-db-sync-t6mz2" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.549869 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dscc6\" (UniqueName: \"kubernetes.io/projected/869a596b-159c-4185-a4ab-0e36c5d130fc-kube-api-access-dscc6\") pod \"neutron-db-sync-t6mz2\" (UID: \"869a596b-159c-4185-a4ab-0e36c5d130fc\") " pod="openstack/neutron-db-sync-t6mz2" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.549960 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf52889-d5f3-44f8-b657-8ff3790962d1-combined-ca-bundle\") pod \"barbican-db-sync-slhtz\" (UID: \"4bf52889-d5f3-44f8-b657-8ff3790962d1\") " pod="openstack/barbican-db-sync-slhtz" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.549988 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/869a596b-159c-4185-a4ab-0e36c5d130fc-combined-ca-bundle\") pod \"neutron-db-sync-t6mz2\" (UID: \"869a596b-159c-4185-a4ab-0e36c5d130fc\") " pod="openstack/neutron-db-sync-t6mz2" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.550021 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4bf52889-d5f3-44f8-b657-8ff3790962d1-db-sync-config-data\") pod \"barbican-db-sync-slhtz\" (UID: \"4bf52889-d5f3-44f8-b657-8ff3790962d1\") " pod="openstack/barbican-db-sync-slhtz" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.550106 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7pcb\" (UniqueName: \"kubernetes.io/projected/4bf52889-d5f3-44f8-b657-8ff3790962d1-kube-api-access-j7pcb\") pod \"barbican-db-sync-slhtz\" (UID: \"4bf52889-d5f3-44f8-b657-8ff3790962d1\") " pod="openstack/barbican-db-sync-slhtz" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.594374 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.596260 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-t6mz2"] Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.654038 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf52889-d5f3-44f8-b657-8ff3790962d1-combined-ca-bundle\") pod \"barbican-db-sync-slhtz\" (UID: \"4bf52889-d5f3-44f8-b657-8ff3790962d1\") " pod="openstack/barbican-db-sync-slhtz" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.654109 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/869a596b-159c-4185-a4ab-0e36c5d130fc-combined-ca-bundle\") pod \"neutron-db-sync-t6mz2\" (UID: \"869a596b-159c-4185-a4ab-0e36c5d130fc\") " pod="openstack/neutron-db-sync-t6mz2" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.654145 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4bf52889-d5f3-44f8-b657-8ff3790962d1-db-sync-config-data\") pod \"barbican-db-sync-slhtz\" (UID: \"4bf52889-d5f3-44f8-b657-8ff3790962d1\") " pod="openstack/barbican-db-sync-slhtz" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.654234 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7pcb\" (UniqueName: \"kubernetes.io/projected/4bf52889-d5f3-44f8-b657-8ff3790962d1-kube-api-access-j7pcb\") pod \"barbican-db-sync-slhtz\" (UID: \"4bf52889-d5f3-44f8-b657-8ff3790962d1\") " pod="openstack/barbican-db-sync-slhtz" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.654280 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/869a596b-159c-4185-a4ab-0e36c5d130fc-config\") pod \"neutron-db-sync-t6mz2\" (UID: \"869a596b-159c-4185-a4ab-0e36c5d130fc\") " pod="openstack/neutron-db-sync-t6mz2" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.654348 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dscc6\" (UniqueName: \"kubernetes.io/projected/869a596b-159c-4185-a4ab-0e36c5d130fc-kube-api-access-dscc6\") pod \"neutron-db-sync-t6mz2\" (UID: \"869a596b-159c-4185-a4ab-0e36c5d130fc\") " pod="openstack/neutron-db-sync-t6mz2" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.678477 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4bf52889-d5f3-44f8-b657-8ff3790962d1-db-sync-config-data\") pod \"barbican-db-sync-slhtz\" (UID: \"4bf52889-d5f3-44f8-b657-8ff3790962d1\") " pod="openstack/barbican-db-sync-slhtz" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.682503 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/869a596b-159c-4185-a4ab-0e36c5d130fc-combined-ca-bundle\") pod \"neutron-db-sync-t6mz2\" (UID: \"869a596b-159c-4185-a4ab-0e36c5d130fc\") " pod="openstack/neutron-db-sync-t6mz2" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.683132 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf52889-d5f3-44f8-b657-8ff3790962d1-combined-ca-bundle\") pod \"barbican-db-sync-slhtz\" (UID: 
\"4bf52889-d5f3-44f8-b657-8ff3790962d1\") " pod="openstack/barbican-db-sync-slhtz" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.688058 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-4wxvl"] Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.689920 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.691625 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/869a596b-159c-4185-a4ab-0e36c5d130fc-config\") pod \"neutron-db-sync-t6mz2\" (UID: \"869a596b-159c-4185-a4ab-0e36c5d130fc\") " pod="openstack/neutron-db-sync-t6mz2" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.700004 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-9r4q7" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.713718 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.735593 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4wxvl"] Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.743055 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.747234 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.756401 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dscc6\" (UniqueName: \"kubernetes.io/projected/869a596b-159c-4185-a4ab-0e36c5d130fc-kube-api-access-dscc6\") pod \"neutron-db-sync-t6mz2\" (UID: \"869a596b-159c-4185-a4ab-0e36c5d130fc\") " pod="openstack/neutron-db-sync-t6mz2" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.771904 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7pcb\" (UniqueName: \"kubernetes.io/projected/4bf52889-d5f3-44f8-b657-8ff3790962d1-kube-api-access-j7pcb\") pod \"barbican-db-sync-slhtz\" (UID: \"4bf52889-d5f3-44f8-b657-8ff3790962d1\") " pod="openstack/barbican-db-sync-slhtz" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.778834 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-combined-ca-bundle\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.787411 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-scripts\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.787855 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltkw6\" (UniqueName: \"kubernetes.io/projected/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-kube-api-access-ltkw6\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:45 crc 
Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.787973 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-config-data\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.788148 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-db-sync-config-data\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.790876 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-etc-machine-id\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.801908 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c487768dc-xjcjd"] Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.845223 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-slhtz" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.849047 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-kc9jz"] Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.850697 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-kc9jz" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:45.900495 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-combined-ca-bundle\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:45.900591 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-scripts\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:45.900654 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltkw6\" (UniqueName: \"kubernetes.io/projected/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-kube-api-access-ltkw6\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:45.900684 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-config-data\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:45.900738 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-db-sync-config-data\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:45.900768 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-etc-machine-id\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.349122 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.368295 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.368527 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-dndng" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.390751 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-etc-machine-id\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.391473 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-t6mz2" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.422113 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-db-sync-config-data\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.427106 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-scripts\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.428811 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-config-data\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.431506 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-combined-ca-bundle\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.485085 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltkw6\" (UniqueName: \"kubernetes.io/projected/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-kube-api-access-ltkw6\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.556284 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f568ffda-82a9-4f47-89d3-13b89a35c9b4-logs\") pod \"placement-db-sync-kc9jz\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " pod="openstack/placement-db-sync-kc9jz" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.557031 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv7qz\" (UniqueName: \"kubernetes.io/projected/f568ffda-82a9-4f47-89d3-13b89a35c9b4-kube-api-access-gv7qz\") pod \"placement-db-sync-kc9jz\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " pod="openstack/placement-db-sync-kc9jz" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.557238 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-config-data\") pod \"placement-db-sync-kc9jz\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " pod="openstack/placement-db-sync-kc9jz" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.557347 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-scripts\") pod \"placement-db-sync-kc9jz\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " pod="openstack/placement-db-sync-kc9jz" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.557441 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-combined-ca-bundle\") pod \"placement-db-sync-kc9jz\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " pod="openstack/placement-db-sync-kc9jz" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.568904 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-kc9jz"] Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.604303 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"] Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.606666 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.613914 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"] Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.641214 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-f67997f9f-4cvfc"] Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.644026 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f67997f9f-4cvfc" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.649025 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-4wxvl"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.654396 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c487768dc-xjcjd"]
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.667747 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv7qz\" (UniqueName: \"kubernetes.io/projected/f568ffda-82a9-4f47-89d3-13b89a35c9b4-kube-api-access-gv7qz\") pod \"placement-db-sync-kc9jz\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " pod="openstack/placement-db-sync-kc9jz"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.668019 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-dns-swift-storage-0\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.668125 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-config\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.668698 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-ovsdbserver-nb\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.668833 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-dns-svc\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.668948 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gpsz\" (UniqueName: \"kubernetes.io/projected/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-kube-api-access-9gpsz\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.669016 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-config-data\") pod \"placement-db-sync-kc9jz\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " pod="openstack/placement-db-sync-kc9jz"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.669097 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-scripts\") pod \"placement-db-sync-kc9jz\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " pod="openstack/placement-db-sync-kc9jz"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.669162 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-combined-ca-bundle\") pod \"placement-db-sync-kc9jz\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " pod="openstack/placement-db-sync-kc9jz"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.669237 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-ovsdbserver-sb\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.669364 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f568ffda-82a9-4f47-89d3-13b89a35c9b4-logs\") pod \"placement-db-sync-kc9jz\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " pod="openstack/placement-db-sync-kc9jz"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.670511 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f568ffda-82a9-4f47-89d3-13b89a35c9b4-logs\") pod \"placement-db-sync-kc9jz\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " pod="openstack/placement-db-sync-kc9jz"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.682096 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f67997f9f-4cvfc"]
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.682728 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-scripts\") pod \"placement-db-sync-kc9jz\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " pod="openstack/placement-db-sync-kc9jz"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.683871 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-combined-ca-bundle\") pod \"placement-db-sync-kc9jz\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " pod="openstack/placement-db-sync-kc9jz"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.687806 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-config-data\") pod \"placement-db-sync-kc9jz\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " pod="openstack/placement-db-sync-kc9jz"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.704889 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv7qz\" (UniqueName: \"kubernetes.io/projected/f568ffda-82a9-4f47-89d3-13b89a35c9b4-kube-api-access-gv7qz\") pod \"placement-db-sync-kc9jz\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " pod="openstack/placement-db-sync-kc9jz"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.820658 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gpsz\" (UniqueName: \"kubernetes.io/projected/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-kube-api-access-9gpsz\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.824985 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-ovsdbserver-sb\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.825151 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71dc95ca-296b-4989-8b57-db806091feea-scripts\") pod \"horizon-f67997f9f-4cvfc\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " pod="openstack/horizon-f67997f9f-4cvfc"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.825213 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71dc95ca-296b-4989-8b57-db806091feea-config-data\") pod \"horizon-f67997f9f-4cvfc\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " pod="openstack/horizon-f67997f9f-4cvfc"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.825436 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-dns-swift-storage-0\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.825494 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-config\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.825539 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71dc95ca-296b-4989-8b57-db806091feea-logs\") pod \"horizon-f67997f9f-4cvfc\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " pod="openstack/horizon-f67997f9f-4cvfc"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.825574 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-ovsdbserver-nb\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.825614 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/71dc95ca-296b-4989-8b57-db806091feea-horizon-secret-key\") pod \"horizon-f67997f9f-4cvfc\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " pod="openstack/horizon-f67997f9f-4cvfc"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.825668 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-dns-svc\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.825701 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pfn2\" (UniqueName: \"kubernetes.io/projected/71dc95ca-296b-4989-8b57-db806091feea-kube-api-access-6pfn2\") pod \"horizon-f67997f9f-4cvfc\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " pod="openstack/horizon-f67997f9f-4cvfc"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.827452 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-dns-swift-storage-0\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.827561 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-ovsdbserver-sb\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.830100 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-dns-svc\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.835983 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-ovsdbserver-nb\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.846449 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-config\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.867744 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gpsz\" (UniqueName: \"kubernetes.io/projected/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-kube-api-access-9gpsz\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.954274 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71dc95ca-296b-4989-8b57-db806091feea-scripts\") pod \"horizon-f67997f9f-4cvfc\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " pod="openstack/horizon-f67997f9f-4cvfc"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.954345 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71dc95ca-296b-4989-8b57-db806091feea-config-data\") pod \"horizon-f67997f9f-4cvfc\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " pod="openstack/horizon-f67997f9f-4cvfc"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.954535 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71dc95ca-296b-4989-8b57-db806091feea-logs\") pod \"horizon-f67997f9f-4cvfc\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " pod="openstack/horizon-f67997f9f-4cvfc"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.954593 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/71dc95ca-296b-4989-8b57-db806091feea-horizon-secret-key\") pod \"horizon-f67997f9f-4cvfc\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " pod="openstack/horizon-f67997f9f-4cvfc"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.954634 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pfn2\" (UniqueName: \"kubernetes.io/projected/71dc95ca-296b-4989-8b57-db806091feea-kube-api-access-6pfn2\") pod \"horizon-f67997f9f-4cvfc\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " pod="openstack/horizon-f67997f9f-4cvfc"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.955641 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71dc95ca-296b-4989-8b57-db806091feea-scripts\") pod \"horizon-f67997f9f-4cvfc\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " pod="openstack/horizon-f67997f9f-4cvfc"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.958249 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71dc95ca-296b-4989-8b57-db806091feea-logs\") pod \"horizon-f67997f9f-4cvfc\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " pod="openstack/horizon-f67997f9f-4cvfc"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.960539 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71dc95ca-296b-4989-8b57-db806091feea-config-data\") pod \"horizon-f67997f9f-4cvfc\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " pod="openstack/horizon-f67997f9f-4cvfc"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.967006 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/71dc95ca-296b-4989-8b57-db806091feea-horizon-secret-key\") pod \"horizon-f67997f9f-4cvfc\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " pod="openstack/horizon-f67997f9f-4cvfc"
Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.992936 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pfn2\" (UniqueName: \"kubernetes.io/projected/71dc95ca-296b-4989-8b57-db806091feea-kube-api-access-6pfn2\") pod \"horizon-f67997f9f-4cvfc\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " pod="openstack/horizon-f67997f9f-4cvfc"
Jan 21 11:19:47 crc kubenswrapper[4881]: I0121 11:19:47.005709 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-kc9jz"
Jan 21 11:19:47 crc kubenswrapper[4881]: I0121 11:19:47.093619 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"
Jan 21 11:19:47 crc kubenswrapper[4881]: I0121 11:19:47.105042 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f67997f9f-4cvfc"
Jan 21 11:19:47 crc kubenswrapper[4881]: I0121 11:19:47.347926 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wg7xs"]
Jan 21 11:19:47 crc kubenswrapper[4881]: I0121 11:19:47.557508 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wg7xs" event={"ID":"cc3f2556-7427-4715-a56d-bbd3d7f8422f","Type":"ContainerStarted","Data":"255feaa412fc0f66dab19086ce14a7162b45237578665b2935e062ce5998cebf"}
Jan 21 11:19:47 crc kubenswrapper[4881]: I0121 11:19:47.593439 4881 generic.go:334] "Generic (PLEG): container finished" podID="386c2ea0-a9e4-490b-b83d-9106af06cd60" containerID="0b3499279e821abc9972417aed3d7ac5e0fad614ad777b7fffe9719ed70fc705" exitCode=0
Jan 21 11:19:47 crc kubenswrapper[4881]: I0121 11:19:47.593520 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" event={"ID":"386c2ea0-a9e4-490b-b83d-9106af06cd60","Type":"ContainerDied","Data":"0b3499279e821abc9972417aed3d7ac5e0fad614ad777b7fffe9719ed70fc705"}
Jan 21 11:19:47 crc kubenswrapper[4881]: I0121 11:19:47.593556 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" event={"ID":"386c2ea0-a9e4-490b-b83d-9106af06cd60","Type":"ContainerStarted","Data":"09b9ddb4df44086c306b5d7a672d610bbf5c91e71fa1fc554515dc374c5b9ffb"}
Jan 21 11:19:47 crc kubenswrapper[4881]: I0121 11:19:47.847907 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 11:19:47 crc kubenswrapper[4881]: W0121 11:19:47.858055 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcec3c24_87bd_4c22_a800_d3835455a38b.slice/crio-254ee6473012064881c3b931949d5889b646c256080246e608ecc4945a005f58 WatchSource:0}: Error finding container 254ee6473012064881c3b931949d5889b646c256080246e608ecc4945a005f58: Status 404 returned error can't find the container with id 254ee6473012064881c3b931949d5889b646c256080246e608ecc4945a005f58
Jan 21 11:19:47 crc kubenswrapper[4881]: I0121 11:19:47.861884 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 11:19:47 crc kubenswrapper[4881]: I0121 11:19:47.879893 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-t6mz2"]
Jan 21 11:19:47 crc kubenswrapper[4881]: I0121 11:19:47.907929 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-slhtz"]
Jan 21 11:19:47 crc kubenswrapper[4881]: I0121 11:19:47.945388 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77fb486557-zjtxw"]
Jan 21 11:19:47 crc kubenswrapper[4881]: W0121 11:19:47.952779 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd96c79b7_58c4_4bcc_9e56_02f2a8860764.slice/crio-af85a7051ff9ab4c70d7145be172f02be844f0b1a0972620051139b6c311b772 WatchSource:0}: Error finding container af85a7051ff9ab4c70d7145be172f02be844f0b1a0972620051139b6c311b772: Status 404 returned error can't find the container with id af85a7051ff9ab4c70d7145be172f02be844f0b1a0972620051139b6c311b772
Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.059519 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4wxvl"]
Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.398805 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"]
Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.640077 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-kc9jz"]
Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.654384 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f67997f9f-4cvfc"]
Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.725148 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wg7xs" event={"ID":"cc3f2556-7427-4715-a56d-bbd3d7f8422f","Type":"ContainerStarted","Data":"20252506bf2921633b620e12ae73d258d135c6a818c92bcf4d604ddbc1f5e46d"}
Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.726313 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c487768dc-xjcjd"
Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.773343 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4wxvl" event={"ID":"65250dcf-0f0f-4fa6-8d57-e07d3d29f290","Type":"ContainerStarted","Data":"fcbe801cf2c7f3f9ce63291d49a4353e90c810cdaa5f27e1d6112dedee1eae63"}
Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.789385 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-slhtz" event={"ID":"4bf52889-d5f3-44f8-b657-8ff3790962d1","Type":"ContainerStarted","Data":"370f02f399b03911d8ee654e46609c08288e0d57caf3655dba13b0b2e545df19"}
Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.807473 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bcec3c24-87bd-4c22-a800-d3835455a38b","Type":"ContainerStarted","Data":"254ee6473012064881c3b931949d5889b646c256080246e608ecc4945a005f58"}
Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.832214 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77fb486557-zjtxw" event={"ID":"d96c79b7-58c4-4bcc-9e56-02f2a8860764","Type":"ContainerStarted","Data":"af85a7051ff9ab4c70d7145be172f02be844f0b1a0972620051139b6c311b772"}
Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.840335 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-ovsdbserver-nb\") pod \"386c2ea0-a9e4-490b-b83d-9106af06cd60\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") "
Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.840447 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-config\") pod \"386c2ea0-a9e4-490b-b83d-9106af06cd60\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") "
Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.840519 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tks72\" (UniqueName: \"kubernetes.io/projected/386c2ea0-a9e4-490b-b83d-9106af06cd60-kube-api-access-tks72\") pod \"386c2ea0-a9e4-490b-b83d-9106af06cd60\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") "
Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.840549 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-dns-svc\") pod \"386c2ea0-a9e4-490b-b83d-9106af06cd60\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") "
Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.840772 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-dns-swift-storage-0\") pod \"386c2ea0-a9e4-490b-b83d-9106af06cd60\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") "
Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.840822 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-ovsdbserver-sb\") pod \"386c2ea0-a9e4-490b-b83d-9106af06cd60\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") "
Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.849342 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" event={"ID":"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f","Type":"ContainerStarted","Data":"89b83a73d98285f1ad5dfbcb846ef4a7cc6a0027b6f7fbb5d7b8bc7a7b615ee8"}
Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.898205 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-t6mz2" event={"ID":"869a596b-159c-4185-a4ab-0e36c5d130fc","Type":"ContainerStarted","Data":"60332241610e38a80a618de620e24fb0c01532db2d0020dd0177b716555cd915"}
Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.916336 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/386c2ea0-a9e4-490b-b83d-9106af06cd60-kube-api-access-tks72" (OuterVolumeSpecName: "kube-api-access-tks72") pod "386c2ea0-a9e4-490b-b83d-9106af06cd60" (UID: "386c2ea0-a9e4-490b-b83d-9106af06cd60"). InnerVolumeSpecName "kube-api-access-tks72". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.946233 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "386c2ea0-a9e4-490b-b83d-9106af06cd60" (UID: "386c2ea0-a9e4-490b-b83d-9106af06cd60"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.948213 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.948246 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tks72\" (UniqueName: \"kubernetes.io/projected/386c2ea0-a9e4-490b-b83d-9106af06cd60-kube-api-access-tks72\") on node \"crc\" DevicePath \"\""
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.037181 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" event={"ID":"386c2ea0-a9e4-490b-b83d-9106af06cd60","Type":"ContainerDied","Data":"09b9ddb4df44086c306b5d7a672d610bbf5c91e71fa1fc554515dc374c5b9ffb"}
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.037292 4881 scope.go:117] "RemoveContainer" containerID="0b3499279e821abc9972417aed3d7ac5e0fad614ad777b7fffe9719ed70fc705"
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.037704 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c487768dc-xjcjd"
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.050583 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "386c2ea0-a9e4-490b-b83d-9106af06cd60" (UID: "386c2ea0-a9e4-490b-b83d-9106af06cd60"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.052553 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "386c2ea0-a9e4-490b-b83d-9106af06cd60" (UID: "386c2ea0-a9e4-490b-b83d-9106af06cd60"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.127655 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.127695 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.141305 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-wg7xs" podStartSLOduration=5.141274479 podStartE2EDuration="5.141274479s" podCreationTimestamp="2026-01-21 11:19:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:19:48.896600907 +0000 UTC m=+1376.156557376" watchObservedRunningTime="2026-01-21 11:19:49.141274479 +0000 UTC m=+1376.401230978"
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.150926 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-77fb486557-zjtxw"]
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.230436 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.256074 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-67c79cd6d5-lrpwx"]
Jan 21 11:19:49 crc kubenswrapper[4881]: E0121 11:19:49.256634 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="386c2ea0-a9e4-490b-b83d-9106af06cd60" containerName="init"
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.256651 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="386c2ea0-a9e4-490b-b83d-9106af06cd60" containerName="init"
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.256895 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="386c2ea0-a9e4-490b-b83d-9106af06cd60" containerName="init"
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.258164 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67c79cd6d5-lrpwx"
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.286916 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-config" (OuterVolumeSpecName: "config") pod "386c2ea0-a9e4-490b-b83d-9106af06cd60" (UID: "386c2ea0-a9e4-490b-b83d-9106af06cd60"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.293907 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-67c79cd6d5-lrpwx"]
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.307534 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "386c2ea0-a9e4-490b-b83d-9106af06cd60" (UID: "386c2ea0-a9e4-490b-b83d-9106af06cd60"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.332425 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-scripts\") pod \"horizon-67c79cd6d5-lrpwx\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " pod="openstack/horizon-67c79cd6d5-lrpwx"
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.334232 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-horizon-secret-key\") pod \"horizon-67c79cd6d5-lrpwx\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " pod="openstack/horizon-67c79cd6d5-lrpwx"
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.334336 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62jn4\" (UniqueName: \"kubernetes.io/projected/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-kube-api-access-62jn4\") pod \"horizon-67c79cd6d5-lrpwx\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " pod="openstack/horizon-67c79cd6d5-lrpwx"
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.334465 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-config-data\") pod \"horizon-67c79cd6d5-lrpwx\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " pod="openstack/horizon-67c79cd6d5-lrpwx"
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.340429 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-logs\") pod \"horizon-67c79cd6d5-lrpwx\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " pod="openstack/horizon-67c79cd6d5-lrpwx"
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.340737 4881 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.340761 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-config\") on node \"crc\" DevicePath \"\""
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.445079 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-logs\") pod \"horizon-67c79cd6d5-lrpwx\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " pod="openstack/horizon-67c79cd6d5-lrpwx"
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.445596 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-logs\") pod \"horizon-67c79cd6d5-lrpwx\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " pod="openstack/horizon-67c79cd6d5-lrpwx"
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.446692 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-scripts\") pod \"horizon-67c79cd6d5-lrpwx\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " pod="openstack/horizon-67c79cd6d5-lrpwx"
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.453423 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-scripts\") pod \"horizon-67c79cd6d5-lrpwx\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " pod="openstack/horizon-67c79cd6d5-lrpwx"
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.453567 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-horizon-secret-key\") pod \"horizon-67c79cd6d5-lrpwx\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " pod="openstack/horizon-67c79cd6d5-lrpwx"
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.453636 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62jn4\" (UniqueName: \"kubernetes.io/projected/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-kube-api-access-62jn4\") pod \"horizon-67c79cd6d5-lrpwx\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " pod="openstack/horizon-67c79cd6d5-lrpwx"
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.453684 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-config-data\") pod \"horizon-67c79cd6d5-lrpwx\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " pod="openstack/horizon-67c79cd6d5-lrpwx"
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.455186 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-config-data\") pod \"horizon-67c79cd6d5-lrpwx\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " pod="openstack/horizon-67c79cd6d5-lrpwx"
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.463429 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-horizon-secret-key\") pod \"horizon-67c79cd6d5-lrpwx\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " pod="openstack/horizon-67c79cd6d5-lrpwx"
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.485697 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62jn4\" (UniqueName: \"kubernetes.io/projected/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-kube-api-access-62jn4\") pod \"horizon-67c79cd6d5-lrpwx\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " pod="openstack/horizon-67c79cd6d5-lrpwx"
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.625424 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c487768dc-xjcjd"]
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.627286 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67c79cd6d5-lrpwx"
Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.645547 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c487768dc-xjcjd"]
Jan 21 11:19:50 crc kubenswrapper[4881]: I0121 11:19:50.088523 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f67997f9f-4cvfc" event={"ID":"71dc95ca-296b-4989-8b57-db806091feea","Type":"ContainerStarted","Data":"c28d2087f01d52faf0bfd56ba4bbb293832881e04f8418954c0e024ee5bf824b"}
Jan 21 11:19:50 crc kubenswrapper[4881]: I0121 11:19:50.116632 4881 generic.go:334] "Generic (PLEG): container finished" podID="a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f" containerID="ab477504b6174b1df2cba532dc993abe653a33a827965c0d26c8c5abcd35974f" exitCode=0
Jan 21 11:19:50 crc kubenswrapper[4881]: I0121 11:19:50.116707 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" event={"ID":"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f","Type":"ContainerDied","Data":"ab477504b6174b1df2cba532dc993abe653a33a827965c0d26c8c5abcd35974f"}
Jan 21 11:19:50 crc kubenswrapper[4881]: I0121 11:19:50.124487 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-t6mz2" event={"ID":"869a596b-159c-4185-a4ab-0e36c5d130fc","Type":"ContainerStarted","Data":"60c7ee63bf67b35a7137c545eb5e36b0ba7f24fe96f583c9314a3bcf2ea933c6"}
Jan 21 11:19:50 crc kubenswrapper[4881]: I0121 11:19:50.134109 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kc9jz" event={"ID":"f568ffda-82a9-4f47-89d3-13b89a35c9b4","Type":"ContainerStarted","Data":"73872e6c614646bff532d76f6a6a2af8c1af4b2996c3b90c9492f6b03925e082"}
Jan 21 11:19:50 crc kubenswrapper[4881]: I0121 11:19:50.183870 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-t6mz2" podStartSLOduration=5.18384683 podStartE2EDuration="5.18384683s" podCreationTimestamp="2026-01-21 11:19:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:19:50.171379448 +0000 UTC m=+1377.431335907" watchObservedRunningTime="2026-01-21 11:19:50.18384683 +0000 UTC m=+1377.443803289"
Jan 21 11:19:50 crc kubenswrapper[4881]: I0121 11:19:50.293246 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-67c79cd6d5-lrpwx"]
Jan 21 11:19:50 crc kubenswrapper[4881]: W0121 11:19:50.299911 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab2b33fa_d171_4525_b7a6_5bfc3a732fa4.slice/crio-c6064c9f2031907151a3a773338a4fc1c8d9b098f896f5cca5bc2a461a7bc91d WatchSource:0}: Error finding container c6064c9f2031907151a3a773338a4fc1c8d9b098f896f5cca5bc2a461a7bc91d: Status 404 returned error can't find the container with id c6064c9f2031907151a3a773338a4fc1c8d9b098f896f5cca5bc2a461a7bc91d
Jan 21 11:19:51 crc kubenswrapper[4881]: I0121 11:19:51.156315 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67c79cd6d5-lrpwx" event={"ID":"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4","Type":"ContainerStarted","Data":"c6064c9f2031907151a3a773338a4fc1c8d9b098f896f5cca5bc2a461a7bc91d"}
Jan 21 11:19:51 crc kubenswrapper[4881]: I0121 11:19:51.328999 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="386c2ea0-a9e4-490b-b83d-9106af06cd60" path="/var/lib/kubelet/pods/386c2ea0-a9e4-490b-b83d-9106af06cd60/volumes"
Jan 21 11:19:52 crc kubenswrapper[4881]: I0121 11:19:52.229063 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" event={"ID":"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f","Type":"ContainerStarted","Data":"3c2fbfa61210bf849e04651287e22b6c198d4c12ea96a2312edd5e9f291c7879"}
Jan 21 11:19:52 crc kubenswrapper[4881]: I0121 11:19:52.229142 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"
Jan 21 11:19:52 crc kubenswrapper[4881]: I0121 11:19:52.272373 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" podStartSLOduration=7.272313518 podStartE2EDuration="7.272313518s" podCreationTimestamp="2026-01-21 11:19:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:19:52.252880812 +0000 UTC m=+1379.512837281" watchObservedRunningTime="2026-01-21 11:19:52.272313518 +0000 UTC m=+1379.532269987"
Jan 21 11:19:53 crc kubenswrapper[4881]: I0121 11:19:53.770992 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="cd1973a5-773b-438b-aab7-709fb828324d" containerName="galera" probeResult="failure" output="command timed out"
Jan 21 11:19:53 crc kubenswrapper[4881]: I0121 11:19:53.777999 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="cd1973a5-773b-438b-aab7-709fb828324d" containerName="galera" probeResult="failure" output="command timed out"
Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.771255 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f67997f9f-4cvfc"]
Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.789214 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-69c96776fd-k2z88"]
Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.792363 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69c96776fd-k2z88"
Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.796842 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc"
Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.815754 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69c96776fd-k2z88"]
Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.860651 4881 generic.go:334] "Generic (PLEG): container finished" podID="bc7e598c-b449-4e8c-9214-44e27cb45e53" containerID="b4ed75bebc3e4f7b35b331a2f216bede613a9086f548aa45e96cbef5724a690a" exitCode=0
Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.860756 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-t4mx7" event={"ID":"bc7e598c-b449-4e8c-9214-44e27cb45e53","Type":"ContainerDied","Data":"b4ed75bebc3e4f7b35b331a2f216bede613a9086f548aa45e96cbef5724a690a"}
Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.880017 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f516fb6-322b-4eee-9d4d-a10176959bbb-config-data\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88"
Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.880111 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-horizon-tls-certs\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88"
Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.880184 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-horizon-secret-key\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88"
Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.880253 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f516fb6-322b-4eee-9d4d-a10176959bbb-scripts\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88"
Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.880313 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f516fb6-322b-4eee-9d4d-a10176959bbb-logs\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88"
Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.880392 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-combined-ca-bundle\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88"
Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.880510 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lfrt\" (UniqueName: \"kubernetes.io/projected/2f516fb6-322b-4eee-9d4d-a10176959bbb-kube-api-access-2lfrt\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88"
Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.984169 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f516fb6-322b-4eee-9d4d-a10176959bbb-config-data\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88"
Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.984227 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-horizon-tls-certs\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88"
Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.984274 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-horizon-secret-key\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88"
Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.984320 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f516fb6-322b-4eee-9d4d-a10176959bbb-scripts\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88"
Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.984358 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f516fb6-322b-4eee-9d4d-a10176959bbb-logs\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88"
Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.984420 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-combined-ca-bundle\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88"
Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.984636 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lfrt\" (UniqueName: \"kubernetes.io/projected/2f516fb6-322b-4eee-9d4d-a10176959bbb-kube-api-access-2lfrt\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88"
Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.986280 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f516fb6-322b-4eee-9d4d-a10176959bbb-config-data\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88"
Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.986606 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f516fb6-322b-4eee-9d4d-a10176959bbb-logs\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88"
Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.987031 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f516fb6-322b-4eee-9d4d-a10176959bbb-scripts\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88"
Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.001315 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-horizon-tls-certs\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88"
Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.013469 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-horizon-secret-key\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88"
Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.015691 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-combined-ca-bundle\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88"
Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.038356 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lfrt\" (UniqueName: \"kubernetes.io/projected/2f516fb6-322b-4eee-9d4d-a10176959bbb-kube-api-access-2lfrt\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88"
Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.049835 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-67c79cd6d5-lrpwx"]
Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.096020 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-68b447d964-6llq5"]
Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.099775 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68b447d964-6llq5"
Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.122988 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69c96776fd-k2z88"
Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.144654 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68b447d964-6llq5"]
Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.322314 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlg56\" (UniqueName: \"kubernetes.io/projected/07cdf1a8-aec4-42ca-a564-c91e7132663d-kube-api-access-rlg56\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5"
Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.322393 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07cdf1a8-aec4-42ca-a564-c91e7132663d-combined-ca-bundle\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5"
Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.322604 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07cdf1a8-aec4-42ca-a564-c91e7132663d-config-data\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5"
Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.322650 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07cdf1a8-aec4-42ca-a564-c91e7132663d-logs\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5"
Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.322744 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07cdf1a8-aec4-42ca-a564-c91e7132663d-scripts\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5"
Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.322835 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/07cdf1a8-aec4-42ca-a564-c91e7132663d-horizon-tls-certs\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5"
Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.322864 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/07cdf1a8-aec4-42ca-a564-c91e7132663d-horizon-secret-key\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5"
Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.424250 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlg56\" (UniqueName: \"kubernetes.io/projected/07cdf1a8-aec4-42ca-a564-c91e7132663d-kube-api-access-rlg56\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5"
Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.424610 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07cdf1a8-aec4-42ca-a564-c91e7132663d-combined-ca-bundle\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5"
Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.424665 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07cdf1a8-aec4-42ca-a564-c91e7132663d-config-data\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5"
Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.424691 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07cdf1a8-aec4-42ca-a564-c91e7132663d-logs\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5"
Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.424732 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07cdf1a8-aec4-42ca-a564-c91e7132663d-scripts\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5"
Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.424819 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/07cdf1a8-aec4-42ca-a564-c91e7132663d-horizon-tls-certs\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5"
Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.424838 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/07cdf1a8-aec4-42ca-a564-c91e7132663d-horizon-secret-key\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5"
Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.426122 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07cdf1a8-aec4-42ca-a564-c91e7132663d-logs\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5"
Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.426951 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07cdf1a8-aec4-42ca-a564-c91e7132663d-config-data\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5"
Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.427520 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07cdf1a8-aec4-42ca-a564-c91e7132663d-scripts\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5"
Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.432456 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/07cdf1a8-aec4-42ca-a564-c91e7132663d-horizon-tls-certs\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5"
Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.445265 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/07cdf1a8-aec4-42ca-a564-c91e7132663d-horizon-secret-key\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5"
Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.461766 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07cdf1a8-aec4-42ca-a564-c91e7132663d-combined-ca-bundle\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5"
Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.462699 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlg56\" (UniqueName: \"kubernetes.io/projected/07cdf1a8-aec4-42ca-a564-c91e7132663d-kube-api-access-rlg56\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5"
Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.755739 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68b447d964-6llq5"
Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.095092 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"
Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.184043 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c88945fd5-tqqvj"]
Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.184293 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" podUID="e51b074c-ae44-4db9-9ce6-b656a961dfaf" containerName="dnsmasq-dns" containerID="cri-o://942d5c3de6fa62e5024b8e526fb126bf73a64902207ddcb2a51d04aa20661a8c" gracePeriod=10
Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.338826 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-t4mx7"
Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.480713 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-db-sync-config-data\") pod \"bc7e598c-b449-4e8c-9214-44e27cb45e53\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") "
Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.480834 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-combined-ca-bundle\") pod \"bc7e598c-b449-4e8c-9214-44e27cb45e53\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") "
Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.480896 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd8cs\" (UniqueName: \"kubernetes.io/projected/bc7e598c-b449-4e8c-9214-44e27cb45e53-kube-api-access-gd8cs\") pod \"bc7e598c-b449-4e8c-9214-44e27cb45e53\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") "
Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.480969 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-config-data\") pod \"bc7e598c-b449-4e8c-9214-44e27cb45e53\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") "
Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.488213 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bc7e598c-b449-4e8c-9214-44e27cb45e53" (UID: "bc7e598c-b449-4e8c-9214-44e27cb45e53"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.509109 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc7e598c-b449-4e8c-9214-44e27cb45e53-kube-api-access-gd8cs" (OuterVolumeSpecName: "kube-api-access-gd8cs") pod "bc7e598c-b449-4e8c-9214-44e27cb45e53" (UID: "bc7e598c-b449-4e8c-9214-44e27cb45e53"). InnerVolumeSpecName "kube-api-access-gd8cs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.527959 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc7e598c-b449-4e8c-9214-44e27cb45e53" (UID: "bc7e598c-b449-4e8c-9214-44e27cb45e53"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.735419 4881 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.735487 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.735503 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd8cs\" (UniqueName: \"kubernetes.io/projected/bc7e598c-b449-4e8c-9214-44e27cb45e53-kube-api-access-gd8cs\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.744175 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-config-data" (OuterVolumeSpecName: "config-data") pod "bc7e598c-b449-4e8c-9214-44e27cb45e53" (UID: "bc7e598c-b449-4e8c-9214-44e27cb45e53"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.837797 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.893155 4881 generic.go:334] "Generic (PLEG): container finished" podID="e51b074c-ae44-4db9-9ce6-b656a961dfaf" containerID="942d5c3de6fa62e5024b8e526fb126bf73a64902207ddcb2a51d04aa20661a8c" exitCode=0 Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.893239 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" event={"ID":"e51b074c-ae44-4db9-9ce6-b656a961dfaf","Type":"ContainerDied","Data":"942d5c3de6fa62e5024b8e526fb126bf73a64902207ddcb2a51d04aa20661a8c"} Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.894947 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-t4mx7" event={"ID":"bc7e598c-b449-4e8c-9214-44e27cb45e53","Type":"ContainerDied","Data":"7f0bea9e9dc943e576802d8c9a13363afa658fe4236f457e4490a5dbcd4320bd"} Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.894971 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f0bea9e9dc943e576802d8c9a13363afa658fe4236f457e4490a5dbcd4320bd" Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.895034 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-t4mx7" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.757916 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 21 11:19:58 crc kubenswrapper[4881]: E0121 11:19:58.758389 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc7e598c-b449-4e8c-9214-44e27cb45e53" containerName="watcher-db-sync" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.758402 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc7e598c-b449-4e8c-9214-44e27cb45e53" containerName="watcher-db-sync" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.758657 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc7e598c-b449-4e8c-9214-44e27cb45e53" containerName="watcher-db-sync" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.759869 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.768883 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-vlkhp" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.769385 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.790290 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.806646 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.820307 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.821880 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.865925 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6244bcac-82b7-4bd4-b93d-3def53490380-logs\") pod \"watcher-api-0\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " pod="openstack/watcher-api-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.865983 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " pod="openstack/watcher-api-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.866018 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " pod="openstack/watcher-api-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.866053 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgt4b\" (UniqueName: \"kubernetes.io/projected/6244bcac-82b7-4bd4-b93d-3def53490380-kube-api-access-sgt4b\") pod \"watcher-api-0\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " pod="openstack/watcher-api-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.866305 4881 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-config-data\") pod \"watcher-api-0\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " pod="openstack/watcher-api-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.875368 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.917714 4881 generic.go:334] "Generic (PLEG): container finished" podID="cc3f2556-7427-4715-a56d-bbd3d7f8422f" containerID="20252506bf2921633b620e12ae73d258d135c6a818c92bcf4d604ddbc1f5e46d" exitCode=0 Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.917774 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wg7xs" event={"ID":"cc3f2556-7427-4715-a56d-bbd3d7f8422f","Type":"ContainerDied","Data":"20252506bf2921633b620e12ae73d258d135c6a818c92bcf4d604ddbc1f5e46d"} Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.969001 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6244bcac-82b7-4bd4-b93d-3def53490380-logs\") pod \"watcher-api-0\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " pod="openstack/watcher-api-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.969068 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " pod="openstack/watcher-api-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.969090 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " pod="openstack/watcher-api-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.969117 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgt4b\" (UniqueName: \"kubernetes.io/projected/6244bcac-82b7-4bd4-b93d-3def53490380-kube-api-access-sgt4b\") pod \"watcher-api-0\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " pod="openstack/watcher-api-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.969211 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/937bcc33-ee83-4f94-ab76-84f534cfd05a-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"937bcc33-ee83-4f94-ab76-84f534cfd05a\") " pod="openstack/watcher-applier-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.969256 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/937bcc33-ee83-4f94-ab76-84f534cfd05a-logs\") pod \"watcher-applier-0\" (UID: \"937bcc33-ee83-4f94-ab76-84f534cfd05a\") " pod="openstack/watcher-applier-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.969631 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlmkr\" (UniqueName: \"kubernetes.io/projected/937bcc33-ee83-4f94-ab76-84f534cfd05a-kube-api-access-rlmkr\") pod \"watcher-applier-0\" (UID: 
\"937bcc33-ee83-4f94-ab76-84f534cfd05a\") " pod="openstack/watcher-applier-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.969715 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/937bcc33-ee83-4f94-ab76-84f534cfd05a-config-data\") pod \"watcher-applier-0\" (UID: \"937bcc33-ee83-4f94-ab76-84f534cfd05a\") " pod="openstack/watcher-applier-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.969745 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6244bcac-82b7-4bd4-b93d-3def53490380-logs\") pod \"watcher-api-0\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " pod="openstack/watcher-api-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.969777 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-config-data\") pod \"watcher-api-0\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " pod="openstack/watcher-api-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.974961 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-config-data\") pod \"watcher-api-0\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " pod="openstack/watcher-api-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.975263 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " pod="openstack/watcher-api-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.994485 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " pod="openstack/watcher-api-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.005051 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.010983 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.015670 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.017458 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.020699 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgt4b\" (UniqueName: \"kubernetes.io/projected/6244bcac-82b7-4bd4-b93d-3def53490380-kube-api-access-sgt4b\") pod \"watcher-api-0\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " pod="openstack/watcher-api-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.074452 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/937bcc33-ee83-4f94-ab76-84f534cfd05a-config-data\") pod \"watcher-applier-0\" (UID: \"937bcc33-ee83-4f94-ab76-84f534cfd05a\") " pod="openstack/watcher-applier-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.074747 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/937bcc33-ee83-4f94-ab76-84f534cfd05a-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"937bcc33-ee83-4f94-ab76-84f534cfd05a\") " pod="openstack/watcher-applier-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.075480 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/937bcc33-ee83-4f94-ab76-84f534cfd05a-logs\") pod \"watcher-applier-0\" (UID: \"937bcc33-ee83-4f94-ab76-84f534cfd05a\") " pod="openstack/watcher-applier-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.076423 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/937bcc33-ee83-4f94-ab76-84f534cfd05a-logs\") pod \"watcher-applier-0\" (UID: \"937bcc33-ee83-4f94-ab76-84f534cfd05a\") " pod="openstack/watcher-applier-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.076541 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlmkr\" (UniqueName: \"kubernetes.io/projected/937bcc33-ee83-4f94-ab76-84f534cfd05a-kube-api-access-rlmkr\") pod \"watcher-applier-0\" (UID: \"937bcc33-ee83-4f94-ab76-84f534cfd05a\") " pod="openstack/watcher-applier-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.079376 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/937bcc33-ee83-4f94-ab76-84f534cfd05a-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"937bcc33-ee83-4f94-ab76-84f534cfd05a\") " pod="openstack/watcher-applier-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.079493 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/937bcc33-ee83-4f94-ab76-84f534cfd05a-config-data\") pod \"watcher-applier-0\" (UID: \"937bcc33-ee83-4f94-ab76-84f534cfd05a\") " pod="openstack/watcher-applier-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.086714 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.097198 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlmkr\" (UniqueName: \"kubernetes.io/projected/937bcc33-ee83-4f94-ab76-84f534cfd05a-kube-api-access-rlmkr\") pod \"watcher-applier-0\" (UID: \"937bcc33-ee83-4f94-ab76-84f534cfd05a\") " pod="openstack/watcher-applier-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.147323 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.175711 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ncbfx"] Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.181820 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.181936 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-config-data\") pod \"watcher-decision-engine-0\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.181968 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-logs\") pod \"watcher-decision-engine-0\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.182012 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wffxr\" (UniqueName: \"kubernetes.io/projected/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-kube-api-access-wffxr\") pod \"watcher-decision-engine-0\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.183423 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.196828 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.220566 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ncbfx"] Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.286508 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79rxx\" (UniqueName: \"kubernetes.io/projected/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-kube-api-access-79rxx\") pod \"redhat-operators-ncbfx\" (UID: \"6a8083e9-c68d-40ca-bde9-b84e43b65ab8\") " pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.286890 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.287142 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-config-data\") pod \"watcher-decision-engine-0\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.287201 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-logs\") pod \"watcher-decision-engine-0\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.287262 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-catalog-content\") pod \"redhat-operators-ncbfx\" (UID: \"6a8083e9-c68d-40ca-bde9-b84e43b65ab8\") " pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.287309 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wffxr\" (UniqueName: \"kubernetes.io/projected/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-kube-api-access-wffxr\") pod \"watcher-decision-engine-0\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.287693 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-utilities\") pod \"redhat-operators-ncbfx\" (UID: \"6a8083e9-c68d-40ca-bde9-b84e43b65ab8\") " pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.288188 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.291728 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-logs\") pod \"watcher-decision-engine-0\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.304660 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.307298 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.312108 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wffxr\" (UniqueName: \"kubernetes.io/projected/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-kube-api-access-wffxr\") pod \"watcher-decision-engine-0\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.326831 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-config-data\") pod \"watcher-decision-engine-0\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.391114 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-utilities\") pod \"redhat-operators-ncbfx\" (UID: \"6a8083e9-c68d-40ca-bde9-b84e43b65ab8\") " pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.391337 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79rxx\" (UniqueName: \"kubernetes.io/projected/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-kube-api-access-79rxx\") pod \"redhat-operators-ncbfx\" (UID: \"6a8083e9-c68d-40ca-bde9-b84e43b65ab8\") " pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.391463 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-catalog-content\") pod \"redhat-operators-ncbfx\" (UID: \"6a8083e9-c68d-40ca-bde9-b84e43b65ab8\") " pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.391610 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-utilities\") pod \"redhat-operators-ncbfx\" (UID: \"6a8083e9-c68d-40ca-bde9-b84e43b65ab8\") " pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.391978 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-catalog-content\") pod \"redhat-operators-ncbfx\" (UID: 
\"6a8083e9-c68d-40ca-bde9-b84e43b65ab8\") " pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.418286 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79rxx\" (UniqueName: \"kubernetes.io/projected/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-kube-api-access-79rxx\") pod \"redhat-operators-ncbfx\" (UID: \"6a8083e9-c68d-40ca-bde9-b84e43b65ab8\") " pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.496743 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.526637 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:20:00 crc kubenswrapper[4881]: I0121 11:20:00.639830 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:20:00 crc kubenswrapper[4881]: I0121 11:20:00.639874 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:20:00 crc kubenswrapper[4881]: I0121 11:20:00.644249 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" podUID="e51b074c-ae44-4db9-9ce6-b656a961dfaf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.137:5353: connect: connection refused" Jan 21 11:20:05 crc kubenswrapper[4881]: I0121 11:20:05.604033 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" podUID="e51b074c-ae44-4db9-9ce6-b656a961dfaf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.137:5353: connect: connection refused" Jan 21 11:20:10 crc kubenswrapper[4881]: E0121 11:20:10.069075 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 21 11:20:10 crc kubenswrapper[4881]: E0121 11:20:10.069731 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 21 11:20:10 crc kubenswrapper[4881]: E0121 11:20:10.070039 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.182:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n67dhc5h58dh698h669h54h5bfh557hf9h77h58bh76h5d4h67bh56fh5d9h5f5h68fh5b7h696h544h67fh5c4h56dh57dh584h556h67ch676h589h684hf7q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6pfn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-f67997f9f-4cvfc_openstack(71dc95ca-296b-4989-8b57-db806091feea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:20:10 crc kubenswrapper[4881]: E0121 11:20:10.073656 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-f67997f9f-4cvfc" podUID="71dc95ca-296b-4989-8b57-db806091feea" Jan 21 11:20:10 crc kubenswrapper[4881]: I0121 11:20:10.608810 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" podUID="e51b074c-ae44-4db9-9ce6-b656a961dfaf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.137:5353: connect: connection refused" Jan 21 11:20:10 crc kubenswrapper[4881]: I0121 11:20:10.608973 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" Jan 21 11:20:12 crc kubenswrapper[4881]: E0121 11:20:12.408440 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-placement-api:watcher_latest" Jan 21 11:20:12 crc kubenswrapper[4881]: E0121 11:20:12.408765 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-placement-api:watcher_latest" Jan 21 
11:20:12 crc kubenswrapper[4881]: E0121 11:20:12.408915 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:38.102.83.182:5001/podified-master-centos10/openstack-placement-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gv7qz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-kc9jz_openstack(f568ffda-82a9-4f47-89d3-13b89a35c9b4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:20:12 crc kubenswrapper[4881]: E0121 11:20:12.410070 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-kc9jz" podUID="f568ffda-82a9-4f47-89d3-13b89a35c9b4" Jan 21 11:20:12 crc kubenswrapper[4881]: E0121 11:20:12.621093 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-placement-api:watcher_latest\\\"\"" pod="openstack/placement-db-sync-kc9jz" podUID="f568ffda-82a9-4f47-89d3-13b89a35c9b4" Jan 21 11:20:14 crc kubenswrapper[4881]: E0121 11:20:14.223720 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Jan 21 11:20:14 crc kubenswrapper[4881]: E0121 11:20:14.224107 4881 kuberuntime_image.go:55] 
"Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Jan 21 11:20:14 crc kubenswrapper[4881]: E0121 11:20:14.224261 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:38.102.83.182:5001/podified-master-centos10/openstack-glance-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gvn9r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-mxb97_openstack(349e8898-8b7c-414a-8357-d431c8b81bf4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:20:14 crc kubenswrapper[4881]: E0121 11:20:14.225565 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-mxb97" podUID="349e8898-8b7c-414a-8357-d431c8b81bf4" Jan 21 11:20:14 crc kubenswrapper[4881]: E0121 11:20:14.242517 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 21 11:20:14 crc kubenswrapper[4881]: E0121 11:20:14.242642 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 21 11:20:14 crc kubenswrapper[4881]: E0121 11:20:14.242808 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:horizon-log,Image:38.102.83.182:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5cfh694h8fh56ch9dh578h658h8dh58h5ch59ch5f6hd6h54ch88h57ch66fh596h8h5cbh576h547h84h5c8h654hcch55fh5b7h678h5b6h9dh78q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62jn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-67c79cd6d5-lrpwx_openstack(ab2b33fa-d171-4525-b7a6-5bfc3a732fa4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:20:14 crc kubenswrapper[4881]: E0121 11:20:14.245379 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-67c79cd6d5-lrpwx" podUID="ab2b33fa-d171-4525-b7a6-5bfc3a732fa4" Jan 21 11:20:14 crc kubenswrapper[4881]: E0121 11:20:14.250149 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 21 11:20:14 crc kubenswrapper[4881]: E0121 11:20:14.250234 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 21 11:20:14 crc kubenswrapper[4881]: E0121 11:20:14.250382 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.182:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n77h668h67h548h54fhd5hd6h5f5h578h79h5dh87h95h99h59bh568h689h65ch5dbh74h554h5d6h5fbh9bh586h566h5b8h5f4h76h5c6h565h6bq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t6lrj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-77fb486557-zjtxw_openstack(d96c79b7-58c4-4bcc-9e56-02f2a8860764): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:20:14 crc kubenswrapper[4881]: E0121 11:20:14.253607 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-77fb486557-zjtxw" podUID="d96c79b7-58c4-4bcc-9e56-02f2a8860764" Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.318730 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.375599 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-combined-ca-bundle\") pod \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.375720 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vbj2\" (UniqueName: \"kubernetes.io/projected/cc3f2556-7427-4715-a56d-bbd3d7f8422f-kube-api-access-6vbj2\") pod \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.375822 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-scripts\") pod \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.375871 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-config-data\") pod \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.375927 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-credential-keys\") pod \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.375945 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-fernet-keys\") pod \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.385012 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "cc3f2556-7427-4715-a56d-bbd3d7f8422f" (UID: "cc3f2556-7427-4715-a56d-bbd3d7f8422f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.385753 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-scripts" (OuterVolumeSpecName: "scripts") pod "cc3f2556-7427-4715-a56d-bbd3d7f8422f" (UID: "cc3f2556-7427-4715-a56d-bbd3d7f8422f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.385860 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc3f2556-7427-4715-a56d-bbd3d7f8422f-kube-api-access-6vbj2" (OuterVolumeSpecName: "kube-api-access-6vbj2") pod "cc3f2556-7427-4715-a56d-bbd3d7f8422f" (UID: "cc3f2556-7427-4715-a56d-bbd3d7f8422f"). InnerVolumeSpecName "kube-api-access-6vbj2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.388463 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "cc3f2556-7427-4715-a56d-bbd3d7f8422f" (UID: "cc3f2556-7427-4715-a56d-bbd3d7f8422f"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.448271 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc3f2556-7427-4715-a56d-bbd3d7f8422f" (UID: "cc3f2556-7427-4715-a56d-bbd3d7f8422f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.455971 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-config-data" (OuterVolumeSpecName: "config-data") pod "cc3f2556-7427-4715-a56d-bbd3d7f8422f" (UID: "cc3f2556-7427-4715-a56d-bbd3d7f8422f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.483559 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vbj2\" (UniqueName: \"kubernetes.io/projected/cc3f2556-7427-4715-a56d-bbd3d7f8422f-kube-api-access-6vbj2\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.483636 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.483651 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.483663 4881 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.483674 4881 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.483686 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.670873 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.676980 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wg7xs" event={"ID":"cc3f2556-7427-4715-a56d-bbd3d7f8422f","Type":"ContainerDied","Data":"255feaa412fc0f66dab19086ce14a7162b45237578665b2935e062ce5998cebf"} Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.677076 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="255feaa412fc0f66dab19086ce14a7162b45237578665b2935e062ce5998cebf" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.437536 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-wg7xs"] Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.445829 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-wg7xs"] Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.524349 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-mzhtm"] Jan 21 11:20:15 crc kubenswrapper[4881]: E0121 11:20:15.524993 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc3f2556-7427-4715-a56d-bbd3d7f8422f" containerName="keystone-bootstrap" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.525016 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc3f2556-7427-4715-a56d-bbd3d7f8422f" containerName="keystone-bootstrap" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.525267 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc3f2556-7427-4715-a56d-bbd3d7f8422f" containerName="keystone-bootstrap" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.526207 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.530690 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.530865 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.531031 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.531172 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.531279 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-j54nk" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.542044 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mzhtm"] Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.721172 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-fernet-keys\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.723775 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-combined-ca-bundle\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.723970 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-credential-keys\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.724118 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2wvg\" (UniqueName: \"kubernetes.io/projected/33f9442b-24ee-47d4-b914-19d32a5cad74-kube-api-access-n2wvg\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.724221 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-scripts\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.724375 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-config-data\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.827177 4881 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-fernet-keys\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.827251 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-combined-ca-bundle\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.827289 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-credential-keys\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.827350 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2wvg\" (UniqueName: \"kubernetes.io/projected/33f9442b-24ee-47d4-b914-19d32a5cad74-kube-api-access-n2wvg\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.827390 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-scripts\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.827462 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-config-data\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.833604 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-combined-ca-bundle\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.833699 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-config-data\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.834947 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-credential-keys\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.847236 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2wvg\" (UniqueName: \"kubernetes.io/projected/33f9442b-24ee-47d4-b914-19d32a5cad74-kube-api-access-n2wvg\") pod \"keystone-bootstrap-mzhtm\" (UID: 
\"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.850777 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-scripts\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.854301 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-fernet-keys\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.858895 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:17 crc kubenswrapper[4881]: I0121 11:20:17.327564 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc3f2556-7427-4715-a56d-bbd3d7f8422f" path="/var/lib/kubelet/pods/cc3f2556-7427-4715-a56d-bbd3d7f8422f/volumes" Jan 21 11:20:20 crc kubenswrapper[4881]: I0121 11:20:20.607051 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" podUID="e51b074c-ae44-4db9-9ce6-b656a961dfaf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.137:5353: i/o timeout" Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.532758 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f67997f9f-4cvfc" Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.636776 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71dc95ca-296b-4989-8b57-db806091feea-config-data\") pod \"71dc95ca-296b-4989-8b57-db806091feea\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.636966 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71dc95ca-296b-4989-8b57-db806091feea-scripts\") pod \"71dc95ca-296b-4989-8b57-db806091feea\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.637187 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/71dc95ca-296b-4989-8b57-db806091feea-horizon-secret-key\") pod \"71dc95ca-296b-4989-8b57-db806091feea\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.637245 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71dc95ca-296b-4989-8b57-db806091feea-logs\") pod \"71dc95ca-296b-4989-8b57-db806091feea\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.637318 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pfn2\" (UniqueName: \"kubernetes.io/projected/71dc95ca-296b-4989-8b57-db806091feea-kube-api-access-6pfn2\") pod \"71dc95ca-296b-4989-8b57-db806091feea\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.637593 4881 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71dc95ca-296b-4989-8b57-db806091feea-scripts" (OuterVolumeSpecName: "scripts") pod "71dc95ca-296b-4989-8b57-db806091feea" (UID: "71dc95ca-296b-4989-8b57-db806091feea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.637749 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71dc95ca-296b-4989-8b57-db806091feea-logs" (OuterVolumeSpecName: "logs") pod "71dc95ca-296b-4989-8b57-db806091feea" (UID: "71dc95ca-296b-4989-8b57-db806091feea"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.638347 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71dc95ca-296b-4989-8b57-db806091feea-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.638374 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71dc95ca-296b-4989-8b57-db806091feea-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.638564 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71dc95ca-296b-4989-8b57-db806091feea-config-data" (OuterVolumeSpecName: "config-data") pod "71dc95ca-296b-4989-8b57-db806091feea" (UID: "71dc95ca-296b-4989-8b57-db806091feea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.643503 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71dc95ca-296b-4989-8b57-db806091feea-kube-api-access-6pfn2" (OuterVolumeSpecName: "kube-api-access-6pfn2") pod "71dc95ca-296b-4989-8b57-db806091feea" (UID: "71dc95ca-296b-4989-8b57-db806091feea"). InnerVolumeSpecName "kube-api-access-6pfn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.645542 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71dc95ca-296b-4989-8b57-db806091feea-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "71dc95ca-296b-4989-8b57-db806091feea" (UID: "71dc95ca-296b-4989-8b57-db806091feea"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.740292 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pfn2\" (UniqueName: \"kubernetes.io/projected/71dc95ca-296b-4989-8b57-db806091feea-kube-api-access-6pfn2\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.740336 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71dc95ca-296b-4989-8b57-db806091feea-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.740348 4881 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/71dc95ca-296b-4989-8b57-db806091feea-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.793459 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f67997f9f-4cvfc" event={"ID":"71dc95ca-296b-4989-8b57-db806091feea","Type":"ContainerDied","Data":"c28d2087f01d52faf0bfd56ba4bbb293832881e04f8418954c0e024ee5bf824b"} Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.793527 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f67997f9f-4cvfc" Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.881750 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f67997f9f-4cvfc"] Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.889676 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-f67997f9f-4cvfc"] Jan 21 11:20:25 crc kubenswrapper[4881]: E0121 11:20:25.064256 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Jan 21 11:20:25 crc kubenswrapper[4881]: E0121 11:20:25.064313 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Jan 21 11:20:25 crc kubenswrapper[4881]: E0121 11:20:25.064456 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:38.102.83.182:5001/podified-master-centos10/openstack-barbican-api:watcher_latest,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j7pcb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-slhtz_openstack(4bf52889-d5f3-44f8-b657-8ff3790962d1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:20:25 crc kubenswrapper[4881]: E0121 11:20:25.065646 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-slhtz" podUID="4bf52889-d5f3-44f8-b657-8ff3790962d1" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.252472 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.262502 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67c79cd6d5-lrpwx" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.296942 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.353175 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-dns-svc\") pod \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.353428 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-config\") pod \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.353468 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4gqq\" (UniqueName: \"kubernetes.io/projected/e51b074c-ae44-4db9-9ce6-b656a961dfaf-kube-api-access-m4gqq\") pod \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.353491 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-ovsdbserver-nb\") pod \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.353526 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-dns-swift-storage-0\") pod \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.353641 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-ovsdbserver-sb\") pod \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.355032 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71dc95ca-296b-4989-8b57-db806091feea" path="/var/lib/kubelet/pods/71dc95ca-296b-4989-8b57-db806091feea/volumes" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.365291 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e51b074c-ae44-4db9-9ce6-b656a961dfaf-kube-api-access-m4gqq" (OuterVolumeSpecName: "kube-api-access-m4gqq") pod "e51b074c-ae44-4db9-9ce6-b656a961dfaf" (UID: "e51b074c-ae44-4db9-9ce6-b656a961dfaf"). InnerVolumeSpecName "kube-api-access-m4gqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.420720 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e51b074c-ae44-4db9-9ce6-b656a961dfaf" (UID: "e51b074c-ae44-4db9-9ce6-b656a961dfaf"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.420892 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-config" (OuterVolumeSpecName: "config") pod "e51b074c-ae44-4db9-9ce6-b656a961dfaf" (UID: "e51b074c-ae44-4db9-9ce6-b656a961dfaf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.423681 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e51b074c-ae44-4db9-9ce6-b656a961dfaf" (UID: "e51b074c-ae44-4db9-9ce6-b656a961dfaf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.432777 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e51b074c-ae44-4db9-9ce6-b656a961dfaf" (UID: "e51b074c-ae44-4db9-9ce6-b656a961dfaf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.435070 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e51b074c-ae44-4db9-9ce6-b656a961dfaf" (UID: "e51b074c-ae44-4db9-9ce6-b656a961dfaf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.455573 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d96c79b7-58c4-4bcc-9e56-02f2a8860764-scripts\") pod \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.455691 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d96c79b7-58c4-4bcc-9e56-02f2a8860764-horizon-secret-key\") pod \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.455740 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-config-data\") pod \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.455760 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-scripts\") pod \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.455807 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62jn4\" (UniqueName: \"kubernetes.io/projected/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-kube-api-access-62jn4\") pod \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " Jan 21 11:20:25 crc 
kubenswrapper[4881]: I0121 11:20:25.455836 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-logs\") pod \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.455878 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6lrj\" (UniqueName: \"kubernetes.io/projected/d96c79b7-58c4-4bcc-9e56-02f2a8860764-kube-api-access-t6lrj\") pod \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.455946 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-horizon-secret-key\") pod \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.456019 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d96c79b7-58c4-4bcc-9e56-02f2a8860764-logs\") pod \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.456085 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d96c79b7-58c4-4bcc-9e56-02f2a8860764-scripts" (OuterVolumeSpecName: "scripts") pod "d96c79b7-58c4-4bcc-9e56-02f2a8860764" (UID: "d96c79b7-58c4-4bcc-9e56-02f2a8860764"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.456109 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d96c79b7-58c4-4bcc-9e56-02f2a8860764-config-data\") pod \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.456494 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-logs" (OuterVolumeSpecName: "logs") pod "ab2b33fa-d171-4525-b7a6-5bfc3a732fa4" (UID: "ab2b33fa-d171-4525-b7a6-5bfc3a732fa4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.456564 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-scripts" (OuterVolumeSpecName: "scripts") pod "ab2b33fa-d171-4525-b7a6-5bfc3a732fa4" (UID: "ab2b33fa-d171-4525-b7a6-5bfc3a732fa4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.456766 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d96c79b7-58c4-4bcc-9e56-02f2a8860764-logs" (OuterVolumeSpecName: "logs") pod "d96c79b7-58c4-4bcc-9e56-02f2a8860764" (UID: "d96c79b7-58c4-4bcc-9e56-02f2a8860764"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.456834 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-config-data" (OuterVolumeSpecName: "config-data") pod "ab2b33fa-d171-4525-b7a6-5bfc3a732fa4" (UID: "ab2b33fa-d171-4525-b7a6-5bfc3a732fa4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.456895 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.456917 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d96c79b7-58c4-4bcc-9e56-02f2a8860764-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.456932 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.456943 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d96c79b7-58c4-4bcc-9e56-02f2a8860764-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.456955 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4gqq\" (UniqueName: \"kubernetes.io/projected/e51b074c-ae44-4db9-9ce6-b656a961dfaf-kube-api-access-m4gqq\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.456969 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.456980 4881 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.456993 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.457003 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.457013 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.457254 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d96c79b7-58c4-4bcc-9e56-02f2a8860764-config-data" (OuterVolumeSpecName: "config-data") pod "d96c79b7-58c4-4bcc-9e56-02f2a8860764" (UID: "d96c79b7-58c4-4bcc-9e56-02f2a8860764"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.460304 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-kube-api-access-62jn4" (OuterVolumeSpecName: "kube-api-access-62jn4") pod "ab2b33fa-d171-4525-b7a6-5bfc3a732fa4" (UID: "ab2b33fa-d171-4525-b7a6-5bfc3a732fa4"). InnerVolumeSpecName "kube-api-access-62jn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.460585 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d96c79b7-58c4-4bcc-9e56-02f2a8860764-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d96c79b7-58c4-4bcc-9e56-02f2a8860764" (UID: "d96c79b7-58c4-4bcc-9e56-02f2a8860764"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.460688 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ab2b33fa-d171-4525-b7a6-5bfc3a732fa4" (UID: "ab2b33fa-d171-4525-b7a6-5bfc3a732fa4"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.468220 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d96c79b7-58c4-4bcc-9e56-02f2a8860764-kube-api-access-t6lrj" (OuterVolumeSpecName: "kube-api-access-t6lrj") pod "d96c79b7-58c4-4bcc-9e56-02f2a8860764" (UID: "d96c79b7-58c4-4bcc-9e56-02f2a8860764"). InnerVolumeSpecName "kube-api-access-t6lrj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.559470 4881 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d96c79b7-58c4-4bcc-9e56-02f2a8860764-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.560208 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.560268 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62jn4\" (UniqueName: \"kubernetes.io/projected/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-kube-api-access-62jn4\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.560352 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6lrj\" (UniqueName: \"kubernetes.io/projected/d96c79b7-58c4-4bcc-9e56-02f2a8860764-kube-api-access-t6lrj\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.560407 4881 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.560469 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d96c79b7-58c4-4bcc-9e56-02f2a8860764-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.612067 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" podUID="e51b074c-ae44-4db9-9ce6-b656a961dfaf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.137:5353: i/o timeout" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.655435 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68b447d964-6llq5"] Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.805150 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" event={"ID":"e51b074c-ae44-4db9-9ce6-b656a961dfaf","Type":"ContainerDied","Data":"485dc8c96eb7030a8e95c465abb23eb90b718f53333b55d575fff9445925584c"} Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.805525 4881 scope.go:117] "RemoveContainer" containerID="942d5c3de6fa62e5024b8e526fb126bf73a64902207ddcb2a51d04aa20661a8c" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.805195 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.807595 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67c79cd6d5-lrpwx" event={"ID":"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4","Type":"ContainerDied","Data":"c6064c9f2031907151a3a773338a4fc1c8d9b098f896f5cca5bc2a461a7bc91d"} Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.807696 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-67c79cd6d5-lrpwx" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.835421 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77fb486557-zjtxw" event={"ID":"d96c79b7-58c4-4bcc-9e56-02f2a8860764","Type":"ContainerDied","Data":"af85a7051ff9ab4c70d7145be172f02be844f0b1a0972620051139b6c311b772"} Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.835468 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:20:25 crc kubenswrapper[4881]: E0121 11:20:25.837565 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-barbican-api:watcher_latest\\\"\"" pod="openstack/barbican-db-sync-slhtz" podUID="4bf52889-d5f3-44f8-b657-8ff3790962d1" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.937466 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-67c79cd6d5-lrpwx"] Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.961374 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-67c79cd6d5-lrpwx"] Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.971085 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c88945fd5-tqqvj"] Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.979457 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c88945fd5-tqqvj"] Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.996950 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-77fb486557-zjtxw"] Jan 21 11:20:26 crc kubenswrapper[4881]: I0121 11:20:26.006599 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-77fb486557-zjtxw"] Jan 21 11:20:27 crc kubenswrapper[4881]: E0121 11:20:27.313075 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-glance-api:watcher_latest\\\"\"" pod="openstack/glance-db-sync-mxb97" podUID="349e8898-8b7c-414a-8357-d431c8b81bf4" Jan 21 11:20:27 crc kubenswrapper[4881]: I0121 11:20:27.324408 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab2b33fa-d171-4525-b7a6-5bfc3a732fa4" path="/var/lib/kubelet/pods/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4/volumes" Jan 21 11:20:27 crc kubenswrapper[4881]: I0121 11:20:27.325025 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d96c79b7-58c4-4bcc-9e56-02f2a8860764" path="/var/lib/kubelet/pods/d96c79b7-58c4-4bcc-9e56-02f2a8860764/volumes" Jan 21 11:20:27 crc kubenswrapper[4881]: I0121 11:20:27.325486 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e51b074c-ae44-4db9-9ce6-b656a961dfaf" path="/var/lib/kubelet/pods/e51b074c-ae44-4db9-9ce6-b656a961dfaf/volumes" Jan 21 11:20:29 crc kubenswrapper[4881]: I0121 11:20:29.851319 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:20:29 crc kubenswrapper[4881]: I0121 11:20:29.851880 4881 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:20:31 crc kubenswrapper[4881]: E0121 11:20:31.493267 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Jan 21 11:20:31 crc kubenswrapper[4881]: E0121 11:20:31.493768 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Jan 21 11:20:31 crc kubenswrapper[4881]: E0121 11:20:31.493984 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.182:5001/podified-master-centos10/openstack-cinder-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ltkw6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-4wxvl_openstack(65250dcf-0f0f-4fa6-8d57-e07d3d29f290): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:20:31 crc 
kubenswrapper[4881]: E0121 11:20:31.496555 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-4wxvl" podUID="65250dcf-0f0f-4fa6-8d57-e07d3d29f290" Jan 21 11:20:31 crc kubenswrapper[4881]: W0121 11:20:31.502825 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07cdf1a8_aec4_42ca_a564_c91e7132663d.slice/crio-b3a69110b13ed57551e9e7b2d409e0ce6c41734f7980f8a68242d767ea7507c3 WatchSource:0}: Error finding container b3a69110b13ed57551e9e7b2d409e0ce6c41734f7980f8a68242d767ea7507c3: Status 404 returned error can't find the container with id b3a69110b13ed57551e9e7b2d409e0ce6c41734f7980f8a68242d767ea7507c3 Jan 21 11:20:31 crc kubenswrapper[4881]: I0121 11:20:31.520989 4881 scope.go:117] "RemoveContainer" containerID="596eab5e695f6c4af1ee0501f1a922c8b4ac8e567cedab5865035324bb33f0cb" Jan 21 11:20:31 crc kubenswrapper[4881]: I0121 11:20:31.902890 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68b447d964-6llq5" event={"ID":"07cdf1a8-aec4-42ca-a564-c91e7132663d","Type":"ContainerStarted","Data":"b3a69110b13ed57551e9e7b2d409e0ce6c41734f7980f8a68242d767ea7507c3"} Jan 21 11:20:31 crc kubenswrapper[4881]: E0121 11:20:31.907808 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-cinder-api:watcher_latest\\\"\"" pod="openstack/cinder-db-sync-4wxvl" podUID="65250dcf-0f0f-4fa6-8d57-e07d3d29f290" Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.039409 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69c96776fd-k2z88"] Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.057212 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ncbfx"] Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.130721 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mzhtm"] Jan 21 11:20:32 crc kubenswrapper[4881]: W0121 11:20:32.136448 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33f9442b_24ee_47d4_b914_19d32a5cad74.slice/crio-5eb630cacdc975524e9b6b35c212c8b27a6bcc9b84c6f9d78fe4ce312021f066 WatchSource:0}: Error finding container 5eb630cacdc975524e9b6b35c212c8b27a6bcc9b84c6f9d78fe4ce312021f066: Status 404 returned error can't find the container with id 5eb630cacdc975524e9b6b35c212c8b27a6bcc9b84c6f9d78fe4ce312021f066 Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.207323 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.218038 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.281325 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.918538 4881 generic.go:334] "Generic (PLEG): container finished" podID="6a8083e9-c68d-40ca-bde9-b84e43b65ab8" containerID="932fbf80100df4b5aa3c652842e044641d2f0a31589d5beff4fb8c850ca3a5fe" exitCode=0 Jan 21 11:20:32 crc kubenswrapper[4881]: 
I0121 11:20:32.918602 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncbfx" event={"ID":"6a8083e9-c68d-40ca-bde9-b84e43b65ab8","Type":"ContainerDied","Data":"932fbf80100df4b5aa3c652842e044641d2f0a31589d5beff4fb8c850ca3a5fe"} Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.919208 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncbfx" event={"ID":"6a8083e9-c68d-40ca-bde9-b84e43b65ab8","Type":"ContainerStarted","Data":"a06c31c201ce60f211d95724861d78b4cdd096d87a4ed5b0a3ede7c018cd2b3c"} Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.924659 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"937bcc33-ee83-4f94-ab76-84f534cfd05a","Type":"ContainerStarted","Data":"997fa5dba21bdf7b6f00e7dc8dc9683ca1d4ab25cea9e4061e18e3bf275550a5"} Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.931079 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bcec3c24-87bd-4c22-a800-d3835455a38b","Type":"ContainerStarted","Data":"04c2a8411b86bd02035922d4fe1ad96f1a1dbf240fbfa10221b52bc6ac101706"} Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.934194 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mzhtm" event={"ID":"33f9442b-24ee-47d4-b914-19d32a5cad74","Type":"ContainerStarted","Data":"b750c2c4c79eaa65d01394c5ce39a3b9970863a1b04d7248173d08889a7ae0be"} Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.934223 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mzhtm" event={"ID":"33f9442b-24ee-47d4-b914-19d32a5cad74","Type":"ContainerStarted","Data":"5eb630cacdc975524e9b6b35c212c8b27a6bcc9b84c6f9d78fe4ce312021f066"} Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.936392 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69c96776fd-k2z88" event={"ID":"2f516fb6-322b-4eee-9d4d-a10176959bbb","Type":"ContainerStarted","Data":"c37cb0dabfc7bd198de45353bd7d592c9381160bf0f186350e93353fe2ea4470"} Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.936418 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69c96776fd-k2z88" event={"ID":"2f516fb6-322b-4eee-9d4d-a10176959bbb","Type":"ContainerStarted","Data":"1c1c6837f2242fbd603bbb32074adc55de9c3121097b94c5088bc30db69ba787"} Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.938522 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"6244bcac-82b7-4bd4-b93d-3def53490380","Type":"ContainerStarted","Data":"438e3940a5181d3570b1ef008c9096ec2907b5d774f74ab745931fd0122208c4"} Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.939133 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"6244bcac-82b7-4bd4-b93d-3def53490380","Type":"ContainerStarted","Data":"97811bb6b6cd1ac4b1dbc5094a9eed081460120416cffcb6a63fe48350301d28"} Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.946522 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kc9jz" event={"ID":"f568ffda-82a9-4f47-89d3-13b89a35c9b4","Type":"ContainerStarted","Data":"e31e701604fd33a6bb82c0b6900e3f3bdeaa0b71abb7488fd4edd2c71ed37a56"} Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.951725 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68b447d964-6llq5" 
event={"ID":"07cdf1a8-aec4-42ca-a564-c91e7132663d","Type":"ContainerStarted","Data":"1ca550c7d5401e7c4177774caca16529ac7e810b26de193d9119b30ce371973d"} Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.951802 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68b447d964-6llq5" event={"ID":"07cdf1a8-aec4-42ca-a564-c91e7132663d","Type":"ContainerStarted","Data":"d08b5a01336542626157ff229e969c250cd28df9c3cb1c31d812c84ee47db821"} Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.952658 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e","Type":"ContainerStarted","Data":"29d3adbd836eae43fe470435c7cc82a51d0ed6187ef1f30da41d37c41cb401fb"} Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.977522 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-mzhtm" podStartSLOduration=17.977498167 podStartE2EDuration="17.977498167s" podCreationTimestamp="2026-01-21 11:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:20:32.971544498 +0000 UTC m=+1420.231500967" watchObservedRunningTime="2026-01-21 11:20:32.977498167 +0000 UTC m=+1420.237454636" Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.998646 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-68b447d964-6llq5" podStartSLOduration=38.544068883 podStartE2EDuration="38.998624995s" podCreationTimestamp="2026-01-21 11:19:54 +0000 UTC" firstStartedPulling="2026-01-21 11:20:31.521114652 +0000 UTC m=+1418.781071121" lastFinishedPulling="2026-01-21 11:20:31.975670764 +0000 UTC m=+1419.235627233" observedRunningTime="2026-01-21 11:20:32.993495937 +0000 UTC m=+1420.253452416" watchObservedRunningTime="2026-01-21 11:20:32.998624995 +0000 UTC m=+1420.258581464" Jan 21 11:20:33 crc kubenswrapper[4881]: I0121 11:20:33.025866 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-kc9jz" podStartSLOduration=5.202317943 podStartE2EDuration="48.025837116s" podCreationTimestamp="2026-01-21 11:19:45 +0000 UTC" firstStartedPulling="2026-01-21 11:19:48.730220375 +0000 UTC m=+1375.990176844" lastFinishedPulling="2026-01-21 11:20:31.553739548 +0000 UTC m=+1418.813696017" observedRunningTime="2026-01-21 11:20:33.015578119 +0000 UTC m=+1420.275534588" watchObservedRunningTime="2026-01-21 11:20:33.025837116 +0000 UTC m=+1420.285793595" Jan 21 11:20:34 crc kubenswrapper[4881]: I0121 11:20:34.977848 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncbfx" event={"ID":"6a8083e9-c68d-40ca-bde9-b84e43b65ab8","Type":"ContainerStarted","Data":"c80c8a89877e92046c31c2139dd4330c1447f9d23ecebf26a3928b9515ff61af"} Jan 21 11:20:34 crc kubenswrapper[4881]: I0121 11:20:34.979121 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"937bcc33-ee83-4f94-ab76-84f534cfd05a","Type":"ContainerStarted","Data":"c3bbd97ebdf9aca32eeb94781f993e7cfdd9203a6bf9ab481c3c0b8ff6f0ae1e"} Jan 21 11:20:34 crc kubenswrapper[4881]: I0121 11:20:34.981344 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bcec3c24-87bd-4c22-a800-d3835455a38b","Type":"ContainerStarted","Data":"b14382df533ca3054b8542bddeff2d41d2f1e579142ea3b20b1a7a9c276362b8"} Jan 21 11:20:34 crc kubenswrapper[4881]: 
I0121 11:20:34.983177 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e","Type":"ContainerStarted","Data":"5db7a5c0d23dd82d2a5258870db858ab9345870f09ad31cd41b42f8d9eaa1f90"} Jan 21 11:20:34 crc kubenswrapper[4881]: I0121 11:20:34.985646 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69c96776fd-k2z88" event={"ID":"2f516fb6-322b-4eee-9d4d-a10176959bbb","Type":"ContainerStarted","Data":"20e9501e200b98586a1c9e7d12e2adf41d01903bd2505ab83e7f8f0fc5404f52"} Jan 21 11:20:34 crc kubenswrapper[4881]: I0121 11:20:34.987258 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"6244bcac-82b7-4bd4-b93d-3def53490380","Type":"ContainerStarted","Data":"d951ea6875808772b952ed153f0f2d5544ca533b9519802da41d11d9d1f68396"} Jan 21 11:20:34 crc kubenswrapper[4881]: I0121 11:20:34.987446 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 21 11:20:35 crc kubenswrapper[4881]: I0121 11:20:35.028365 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=34.925924616 podStartE2EDuration="37.028343213s" podCreationTimestamp="2026-01-21 11:19:58 +0000 UTC" firstStartedPulling="2026-01-21 11:20:32.298699645 +0000 UTC m=+1419.558656104" lastFinishedPulling="2026-01-21 11:20:34.401118232 +0000 UTC m=+1421.661074701" observedRunningTime="2026-01-21 11:20:35.023463911 +0000 UTC m=+1422.283420390" watchObservedRunningTime="2026-01-21 11:20:35.028343213 +0000 UTC m=+1422.288299682" Jan 21 11:20:35 crc kubenswrapper[4881]: I0121 11:20:35.059598 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=34.999073806 podStartE2EDuration="37.059581044s" podCreationTimestamp="2026-01-21 11:19:58 +0000 UTC" firstStartedPulling="2026-01-21 11:20:32.256609262 +0000 UTC m=+1419.516565731" lastFinishedPulling="2026-01-21 11:20:34.3171165 +0000 UTC m=+1421.577072969" observedRunningTime="2026-01-21 11:20:35.052488147 +0000 UTC m=+1422.312444616" watchObservedRunningTime="2026-01-21 11:20:35.059581044 +0000 UTC m=+1422.319537513" Jan 21 11:20:35 crc kubenswrapper[4881]: I0121 11:20:35.111719 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-69c96776fd-k2z88" podStartSLOduration=41.111696658 podStartE2EDuration="41.111696658s" podCreationTimestamp="2026-01-21 11:19:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:20:35.107443062 +0000 UTC m=+1422.367399551" watchObservedRunningTime="2026-01-21 11:20:35.111696658 +0000 UTC m=+1422.371653127" Jan 21 11:20:35 crc kubenswrapper[4881]: I0121 11:20:35.124639 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:20:35 crc kubenswrapper[4881]: I0121 11:20:35.124740 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:20:35 crc kubenswrapper[4881]: I0121 11:20:35.133363 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=37.13333779 podStartE2EDuration="37.13333779s" podCreationTimestamp="2026-01-21 11:19:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:20:35.082445516 +0000 UTC m=+1422.342401985" watchObservedRunningTime="2026-01-21 11:20:35.13333779 +0000 UTC m=+1422.393294269" Jan 21 11:20:35 crc kubenswrapper[4881]: I0121 11:20:35.756746 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-68b447d964-6llq5" Jan 21 11:20:35 crc kubenswrapper[4881]: I0121 11:20:35.756820 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-68b447d964-6llq5" Jan 21 11:20:37 crc kubenswrapper[4881]: I0121 11:20:37.022089 4881 generic.go:334] "Generic (PLEG): container finished" podID="6a8083e9-c68d-40ca-bde9-b84e43b65ab8" containerID="c80c8a89877e92046c31c2139dd4330c1447f9d23ecebf26a3928b9515ff61af" exitCode=0 Jan 21 11:20:37 crc kubenswrapper[4881]: I0121 11:20:37.022143 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncbfx" event={"ID":"6a8083e9-c68d-40ca-bde9-b84e43b65ab8","Type":"ContainerDied","Data":"c80c8a89877e92046c31c2139dd4330c1447f9d23ecebf26a3928b9515ff61af"} Jan 21 11:20:37 crc kubenswrapper[4881]: I0121 11:20:37.803554 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 21 11:20:39 crc kubenswrapper[4881]: I0121 11:20:39.087254 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Jan 21 11:20:39 crc kubenswrapper[4881]: I0121 11:20:39.087382 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 21 11:20:39 crc kubenswrapper[4881]: I0121 11:20:39.092440 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Jan 21 11:20:39 crc kubenswrapper[4881]: I0121 11:20:39.148613 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Jan 21 11:20:39 crc kubenswrapper[4881]: I0121 11:20:39.149294 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Jan 21 11:20:39 crc kubenswrapper[4881]: I0121 11:20:39.192744 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Jan 21 11:20:39 crc kubenswrapper[4881]: I0121 11:20:39.498745 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 21 11:20:39 crc kubenswrapper[4881]: I0121 11:20:39.534728 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 21 11:20:40 crc kubenswrapper[4881]: I0121 11:20:40.051165 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 21 11:20:40 crc kubenswrapper[4881]: I0121 11:20:40.060039 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 21 11:20:40 crc kubenswrapper[4881]: I0121 11:20:40.098351 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 21 11:20:40 crc kubenswrapper[4881]: I0121 11:20:40.114083 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Jan 21 11:20:41 crc kubenswrapper[4881]: I0121 11:20:41.062994 4881 generic.go:334] "Generic (PLEG): container finished" podID="33f9442b-24ee-47d4-b914-19d32a5cad74" 
containerID="b750c2c4c79eaa65d01394c5ce39a3b9970863a1b04d7248173d08889a7ae0be" exitCode=0 Jan 21 11:20:41 crc kubenswrapper[4881]: I0121 11:20:41.063097 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mzhtm" event={"ID":"33f9442b-24ee-47d4-b914-19d32a5cad74","Type":"ContainerDied","Data":"b750c2c4c79eaa65d01394c5ce39a3b9970863a1b04d7248173d08889a7ae0be"} Jan 21 11:20:43 crc kubenswrapper[4881]: I0121 11:20:43.091620 4881 generic.go:334] "Generic (PLEG): container finished" podID="f568ffda-82a9-4f47-89d3-13b89a35c9b4" containerID="e31e701604fd33a6bb82c0b6900e3f3bdeaa0b71abb7488fd4edd2c71ed37a56" exitCode=0 Jan 21 11:20:43 crc kubenswrapper[4881]: I0121 11:20:43.091724 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kc9jz" event={"ID":"f568ffda-82a9-4f47-89d3-13b89a35c9b4","Type":"ContainerDied","Data":"e31e701604fd33a6bb82c0b6900e3f3bdeaa0b71abb7488fd4edd2c71ed37a56"} Jan 21 11:20:43 crc kubenswrapper[4881]: I0121 11:20:43.479841 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 21 11:20:43 crc kubenswrapper[4881]: I0121 11:20:43.480093 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="6244bcac-82b7-4bd4-b93d-3def53490380" containerName="watcher-api-log" containerID="cri-o://438e3940a5181d3570b1ef008c9096ec2907b5d774f74ab745931fd0122208c4" gracePeriod=30 Jan 21 11:20:43 crc kubenswrapper[4881]: I0121 11:20:43.480178 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="6244bcac-82b7-4bd4-b93d-3def53490380" containerName="watcher-api" containerID="cri-o://d951ea6875808772b952ed153f0f2d5544ca533b9519802da41d11d9d1f68396" gracePeriod=30 Jan 21 11:20:44 crc kubenswrapper[4881]: I0121 11:20:44.107663 4881 generic.go:334] "Generic (PLEG): container finished" podID="6244bcac-82b7-4bd4-b93d-3def53490380" containerID="438e3940a5181d3570b1ef008c9096ec2907b5d774f74ab745931fd0122208c4" exitCode=143 Jan 21 11:20:44 crc kubenswrapper[4881]: I0121 11:20:44.107755 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"6244bcac-82b7-4bd4-b93d-3def53490380","Type":"ContainerDied","Data":"438e3940a5181d3570b1ef008c9096ec2907b5d774f74ab745931fd0122208c4"} Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.127998 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-69c96776fd-k2z88" podUID="2f516fb6-322b-4eee-9d4d-a10176959bbb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.160:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.160:8443: connect: connection refused" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.131214 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mzhtm" event={"ID":"33f9442b-24ee-47d4-b914-19d32a5cad74","Type":"ContainerDied","Data":"5eb630cacdc975524e9b6b35c212c8b27a6bcc9b84c6f9d78fe4ce312021f066"} Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.131346 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5eb630cacdc975524e9b6b35c212c8b27a6bcc9b84c6f9d78fe4ce312021f066" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.138505 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-kc9jz" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.139122 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kc9jz" event={"ID":"f568ffda-82a9-4f47-89d3-13b89a35c9b4","Type":"ContainerDied","Data":"73872e6c614646bff532d76f6a6a2af8c1af4b2996c3b90c9492f6b03925e082"} Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.139211 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73872e6c614646bff532d76f6a6a2af8c1af4b2996c3b90c9492f6b03925e082" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.139907 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.140372 4881 generic.go:334] "Generic (PLEG): container finished" podID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerID="5db7a5c0d23dd82d2a5258870db858ab9345870f09ad31cd41b42f8d9eaa1f90" exitCode=1 Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.140456 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e","Type":"ContainerDied","Data":"5db7a5c0d23dd82d2a5258870db858ab9345870f09ad31cd41b42f8d9eaa1f90"} Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.141209 4881 scope.go:117] "RemoveContainer" containerID="5db7a5c0d23dd82d2a5258870db858ab9345870f09ad31cd41b42f8d9eaa1f90" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.158357 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2wvg\" (UniqueName: \"kubernetes.io/projected/33f9442b-24ee-47d4-b914-19d32a5cad74-kube-api-access-n2wvg\") pod \"33f9442b-24ee-47d4-b914-19d32a5cad74\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.158651 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-config-data\") pod \"33f9442b-24ee-47d4-b914-19d32a5cad74\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.158770 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-combined-ca-bundle\") pod \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.158909 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-scripts\") pod \"33f9442b-24ee-47d4-b914-19d32a5cad74\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.159129 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-fernet-keys\") pod \"33f9442b-24ee-47d4-b914-19d32a5cad74\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.159194 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-credential-keys\") pod 
\"33f9442b-24ee-47d4-b914-19d32a5cad74\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.159282 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f568ffda-82a9-4f47-89d3-13b89a35c9b4-logs\") pod \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.159355 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv7qz\" (UniqueName: \"kubernetes.io/projected/f568ffda-82a9-4f47-89d3-13b89a35c9b4-kube-api-access-gv7qz\") pod \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.159460 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-combined-ca-bundle\") pod \"33f9442b-24ee-47d4-b914-19d32a5cad74\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.159560 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-config-data\") pod \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.160075 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-scripts\") pod \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.168475 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f568ffda-82a9-4f47-89d3-13b89a35c9b4-logs" (OuterVolumeSpecName: "logs") pod "f568ffda-82a9-4f47-89d3-13b89a35c9b4" (UID: "f568ffda-82a9-4f47-89d3-13b89a35c9b4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.169913 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "33f9442b-24ee-47d4-b914-19d32a5cad74" (UID: "33f9442b-24ee-47d4-b914-19d32a5cad74"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.170383 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "33f9442b-24ee-47d4-b914-19d32a5cad74" (UID: "33f9442b-24ee-47d4-b914-19d32a5cad74"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.172843 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33f9442b-24ee-47d4-b914-19d32a5cad74-kube-api-access-n2wvg" (OuterVolumeSpecName: "kube-api-access-n2wvg") pod "33f9442b-24ee-47d4-b914-19d32a5cad74" (UID: "33f9442b-24ee-47d4-b914-19d32a5cad74"). InnerVolumeSpecName "kube-api-access-n2wvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.177463 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-scripts" (OuterVolumeSpecName: "scripts") pod "33f9442b-24ee-47d4-b914-19d32a5cad74" (UID: "33f9442b-24ee-47d4-b914-19d32a5cad74"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.178126 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f568ffda-82a9-4f47-89d3-13b89a35c9b4-kube-api-access-gv7qz" (OuterVolumeSpecName: "kube-api-access-gv7qz") pod "f568ffda-82a9-4f47-89d3-13b89a35c9b4" (UID: "f568ffda-82a9-4f47-89d3-13b89a35c9b4"). InnerVolumeSpecName "kube-api-access-gv7qz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.193207 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-scripts" (OuterVolumeSpecName: "scripts") pod "f568ffda-82a9-4f47-89d3-13b89a35c9b4" (UID: "f568ffda-82a9-4f47-89d3-13b89a35c9b4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.209047 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-config-data" (OuterVolumeSpecName: "config-data") pod "33f9442b-24ee-47d4-b914-19d32a5cad74" (UID: "33f9442b-24ee-47d4-b914-19d32a5cad74"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.213114 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33f9442b-24ee-47d4-b914-19d32a5cad74" (UID: "33f9442b-24ee-47d4-b914-19d32a5cad74"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.248097 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-config-data" (OuterVolumeSpecName: "config-data") pod "f568ffda-82a9-4f47-89d3-13b89a35c9b4" (UID: "f568ffda-82a9-4f47-89d3-13b89a35c9b4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.268221 4881 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.268262 4881 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.268273 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv7qz\" (UniqueName: \"kubernetes.io/projected/f568ffda-82a9-4f47-89d3-13b89a35c9b4-kube-api-access-gv7qz\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.268284 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f568ffda-82a9-4f47-89d3-13b89a35c9b4-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.268295 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.268304 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.268312 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.268321 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2wvg\" (UniqueName: \"kubernetes.io/projected/33f9442b-24ee-47d4-b914-19d32a5cad74-kube-api-access-n2wvg\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.268329 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.268337 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.291935 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f568ffda-82a9-4f47-89d3-13b89a35c9b4" (UID: "f568ffda-82a9-4f47-89d3-13b89a35c9b4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.370469 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.518205 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="6244bcac-82b7-4bd4-b93d-3def53490380" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": read tcp 10.217.0.2:50344->10.217.0.162:9322: read: connection reset by peer" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.518259 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="6244bcac-82b7-4bd4-b93d-3def53490380" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": read tcp 10.217.0.2:50334->10.217.0.162:9322: read: connection reset by peer" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.766049 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-68b447d964-6llq5" podUID="07cdf1a8-aec4-42ca-a564-c91e7132663d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.161:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.161:8443: connect: connection refused" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.221317 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncbfx" event={"ID":"6a8083e9-c68d-40ca-bde9-b84e43b65ab8","Type":"ContainerStarted","Data":"bb51d30f717ade21f99893a221476158fedbab913c5592a0655c1dfba33d69c7"} Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.232863 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-slhtz" event={"ID":"4bf52889-d5f3-44f8-b657-8ff3790962d1","Type":"ContainerStarted","Data":"3a796b1b54b7432132400a5a214afb4cf61aaada5f5054cc747d5e74194d9dae"} Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.225678 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.293692 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bcec3c24-87bd-4c22-a800-d3835455a38b","Type":"ContainerStarted","Data":"ca18caa0fee509128e7ffae2755d6b5b1126bfe1c63366090fd0947db93d8443"} Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.332079 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6244bcac-82b7-4bd4-b93d-3def53490380-logs\") pod \"6244bcac-82b7-4bd4-b93d-3def53490380\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.332156 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-config-data\") pod \"6244bcac-82b7-4bd4-b93d-3def53490380\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.332207 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-custom-prometheus-ca\") pod \"6244bcac-82b7-4bd4-b93d-3def53490380\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.332231 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-combined-ca-bundle\") pod \"6244bcac-82b7-4bd4-b93d-3def53490380\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.332327 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgt4b\" (UniqueName: \"kubernetes.io/projected/6244bcac-82b7-4bd4-b93d-3def53490380-kube-api-access-sgt4b\") pod \"6244bcac-82b7-4bd4-b93d-3def53490380\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.333959 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6244bcac-82b7-4bd4-b93d-3def53490380-logs" (OuterVolumeSpecName: "logs") pod "6244bcac-82b7-4bd4-b93d-3def53490380" (UID: "6244bcac-82b7-4bd4-b93d-3def53490380"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.340136 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e","Type":"ContainerStarted","Data":"61f6b4008e5afe3c84bc4dbf116ba996728224955a2729f3dc2de6c1a2eeb445"} Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.359986 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6244bcac-82b7-4bd4-b93d-3def53490380-kube-api-access-sgt4b" (OuterVolumeSpecName: "kube-api-access-sgt4b") pod "6244bcac-82b7-4bd4-b93d-3def53490380" (UID: "6244bcac-82b7-4bd4-b93d-3def53490380"). InnerVolumeSpecName "kube-api-access-sgt4b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.386066 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ncbfx" podStartSLOduration=41.675279952 podStartE2EDuration="47.386036259s" podCreationTimestamp="2026-01-21 11:19:59 +0000 UTC" firstStartedPulling="2026-01-21 11:20:33.224095146 +0000 UTC m=+1420.484051615" lastFinishedPulling="2026-01-21 11:20:38.934851453 +0000 UTC m=+1426.194807922" observedRunningTime="2026-01-21 11:20:46.260536945 +0000 UTC m=+1433.520493424" watchObservedRunningTime="2026-01-21 11:20:46.386036259 +0000 UTC m=+1433.645992728" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.399303 4881 generic.go:334] "Generic (PLEG): container finished" podID="6244bcac-82b7-4bd4-b93d-3def53490380" containerID="d951ea6875808772b952ed153f0f2d5544ca533b9519802da41d11d9d1f68396" exitCode=0 Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.401006 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.399383 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.402040 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-kc9jz" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.399407 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"6244bcac-82b7-4bd4-b93d-3def53490380","Type":"ContainerDied","Data":"d951ea6875808772b952ed153f0f2d5544ca533b9519802da41d11d9d1f68396"} Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.402481 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"6244bcac-82b7-4bd4-b93d-3def53490380","Type":"ContainerDied","Data":"97811bb6b6cd1ac4b1dbc5094a9eed081460120416cffcb6a63fe48350301d28"} Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.402501 4881 scope.go:117] "RemoveContainer" containerID="d951ea6875808772b952ed153f0f2d5544ca533b9519802da41d11d9d1f68396" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.429943 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "6244bcac-82b7-4bd4-b93d-3def53490380" (UID: "6244bcac-82b7-4bd4-b93d-3def53490380"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.437591 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-slhtz" podStartSLOduration=3.811970868 podStartE2EDuration="1m1.437565981s" podCreationTimestamp="2026-01-21 11:19:45 +0000 UTC" firstStartedPulling="2026-01-21 11:19:47.926706403 +0000 UTC m=+1375.186662872" lastFinishedPulling="2026-01-21 11:20:45.552301516 +0000 UTC m=+1432.812257985" observedRunningTime="2026-01-21 11:20:46.300053778 +0000 UTC m=+1433.560010247" watchObservedRunningTime="2026-01-21 11:20:46.437565981 +0000 UTC m=+1433.697522450" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.440958 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6244bcac-82b7-4bd4-b93d-3def53490380-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.441237 4881 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.441252 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgt4b\" (UniqueName: \"kubernetes.io/projected/6244bcac-82b7-4bd4-b93d-3def53490380-kube-api-access-sgt4b\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.466855 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-857c5cc966-ggkc4"] Jan 21 11:20:46 crc kubenswrapper[4881]: E0121 11:20:46.467357 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33f9442b-24ee-47d4-b914-19d32a5cad74" containerName="keystone-bootstrap" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.467377 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="33f9442b-24ee-47d4-b914-19d32a5cad74" containerName="keystone-bootstrap" Jan 21 11:20:46 crc kubenswrapper[4881]: E0121 11:20:46.467395 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e51b074c-ae44-4db9-9ce6-b656a961dfaf" containerName="init" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.467404 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e51b074c-ae44-4db9-9ce6-b656a961dfaf" containerName="init" Jan 21 11:20:46 crc kubenswrapper[4881]: E0121 11:20:46.467419 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e51b074c-ae44-4db9-9ce6-b656a961dfaf" containerName="dnsmasq-dns" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.467429 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e51b074c-ae44-4db9-9ce6-b656a961dfaf" containerName="dnsmasq-dns" Jan 21 11:20:46 crc kubenswrapper[4881]: E0121 11:20:46.467454 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6244bcac-82b7-4bd4-b93d-3def53490380" containerName="watcher-api-log" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.467462 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="6244bcac-82b7-4bd4-b93d-3def53490380" containerName="watcher-api-log" Jan 21 11:20:46 crc kubenswrapper[4881]: E0121 11:20:46.467486 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f568ffda-82a9-4f47-89d3-13b89a35c9b4" containerName="placement-db-sync" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.467494 4881 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f568ffda-82a9-4f47-89d3-13b89a35c9b4" containerName="placement-db-sync" Jan 21 11:20:46 crc kubenswrapper[4881]: E0121 11:20:46.467514 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6244bcac-82b7-4bd4-b93d-3def53490380" containerName="watcher-api" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.467520 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="6244bcac-82b7-4bd4-b93d-3def53490380" containerName="watcher-api" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.467724 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f568ffda-82a9-4f47-89d3-13b89a35c9b4" containerName="placement-db-sync" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.467744 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="6244bcac-82b7-4bd4-b93d-3def53490380" containerName="watcher-api" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.467760 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e51b074c-ae44-4db9-9ce6-b656a961dfaf" containerName="dnsmasq-dns" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.467794 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="6244bcac-82b7-4bd4-b93d-3def53490380" containerName="watcher-api-log" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.467818 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="33f9442b-24ee-47d4-b914-19d32a5cad74" containerName="keystone-bootstrap" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.468595 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.475040 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.475254 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6244bcac-82b7-4bd4-b93d-3def53490380" (UID: "6244bcac-82b7-4bd4-b93d-3def53490380"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.475504 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.475597 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.475654 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-j54nk" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.479181 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.484437 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-59bf6c8c7b-wvc46"] Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.488921 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.489659 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.501755 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-857c5cc966-ggkc4"] Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.501775 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-config-data" (OuterVolumeSpecName: "config-data") pod "6244bcac-82b7-4bd4-b93d-3def53490380" (UID: "6244bcac-82b7-4bd4-b93d-3def53490380"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.504611 4881 scope.go:117] "RemoveContainer" containerID="438e3940a5181d3570b1ef008c9096ec2907b5d774f74ab745931fd0122208c4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.506258 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.506281 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.506434 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.506632 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.506895 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-dndng" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543171 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpfnt\" (UniqueName: \"kubernetes.io/projected/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-kube-api-access-jpfnt\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543227 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9358f706-24c3-46c5-8490-89402a85e9a4-config-data\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543259 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9358f706-24c3-46c5-8490-89402a85e9a4-scripts\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543292 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9358f706-24c3-46c5-8490-89402a85e9a4-logs\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543324 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9358f706-24c3-46c5-8490-89402a85e9a4-public-tls-certs\") pod 
\"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543348 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6jns\" (UniqueName: \"kubernetes.io/projected/9358f706-24c3-46c5-8490-89402a85e9a4-kube-api-access-f6jns\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543406 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-credential-keys\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543428 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-public-tls-certs\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543506 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-combined-ca-bundle\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543574 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-internal-tls-certs\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543629 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-fernet-keys\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543667 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9358f706-24c3-46c5-8490-89402a85e9a4-combined-ca-bundle\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543708 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-scripts\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543739 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9358f706-24c3-46c5-8490-89402a85e9a4-internal-tls-certs\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543761 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-config-data\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543899 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543916 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.563993 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-59bf6c8c7b-wvc46"] Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.587618 4881 scope.go:117] "RemoveContainer" containerID="d951ea6875808772b952ed153f0f2d5544ca533b9519802da41d11d9d1f68396" Jan 21 11:20:46 crc kubenswrapper[4881]: E0121 11:20:46.591926 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d951ea6875808772b952ed153f0f2d5544ca533b9519802da41d11d9d1f68396\": container with ID starting with d951ea6875808772b952ed153f0f2d5544ca533b9519802da41d11d9d1f68396 not found: ID does not exist" containerID="d951ea6875808772b952ed153f0f2d5544ca533b9519802da41d11d9d1f68396" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.591984 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d951ea6875808772b952ed153f0f2d5544ca533b9519802da41d11d9d1f68396"} err="failed to get container status \"d951ea6875808772b952ed153f0f2d5544ca533b9519802da41d11d9d1f68396\": rpc error: code = NotFound desc = could not find container \"d951ea6875808772b952ed153f0f2d5544ca533b9519802da41d11d9d1f68396\": container with ID starting with d951ea6875808772b952ed153f0f2d5544ca533b9519802da41d11d9d1f68396 not found: ID does not exist" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.592021 4881 scope.go:117] "RemoveContainer" containerID="438e3940a5181d3570b1ef008c9096ec2907b5d774f74ab745931fd0122208c4" Jan 21 11:20:46 crc kubenswrapper[4881]: E0121 11:20:46.593948 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"438e3940a5181d3570b1ef008c9096ec2907b5d774f74ab745931fd0122208c4\": container with ID starting with 438e3940a5181d3570b1ef008c9096ec2907b5d774f74ab745931fd0122208c4 not found: ID does not exist" containerID="438e3940a5181d3570b1ef008c9096ec2907b5d774f74ab745931fd0122208c4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.593981 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"438e3940a5181d3570b1ef008c9096ec2907b5d774f74ab745931fd0122208c4"} err="failed to get container status \"438e3940a5181d3570b1ef008c9096ec2907b5d774f74ab745931fd0122208c4\": rpc error: code = NotFound desc = could 
not find container \"438e3940a5181d3570b1ef008c9096ec2907b5d774f74ab745931fd0122208c4\": container with ID starting with 438e3940a5181d3570b1ef008c9096ec2907b5d774f74ab745931fd0122208c4 not found: ID does not exist" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.645412 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-fernet-keys\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.645473 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9358f706-24c3-46c5-8490-89402a85e9a4-combined-ca-bundle\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.645507 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-scripts\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.645538 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9358f706-24c3-46c5-8490-89402a85e9a4-internal-tls-certs\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.645561 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-config-data\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.645616 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpfnt\" (UniqueName: \"kubernetes.io/projected/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-kube-api-access-jpfnt\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.645644 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9358f706-24c3-46c5-8490-89402a85e9a4-config-data\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.645667 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9358f706-24c3-46c5-8490-89402a85e9a4-scripts\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.645691 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9358f706-24c3-46c5-8490-89402a85e9a4-logs\") pod \"placement-59bf6c8c7b-wvc46\" (UID: 
\"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.645718 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9358f706-24c3-46c5-8490-89402a85e9a4-public-tls-certs\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.645744 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6jns\" (UniqueName: \"kubernetes.io/projected/9358f706-24c3-46c5-8490-89402a85e9a4-kube-api-access-f6jns\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.645808 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-credential-keys\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.645831 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-public-tls-certs\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.645900 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-combined-ca-bundle\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.645951 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-internal-tls-certs\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.649854 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9358f706-24c3-46c5-8490-89402a85e9a4-logs\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.656992 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9358f706-24c3-46c5-8490-89402a85e9a4-combined-ca-bundle\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.657508 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-fernet-keys\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc 
kubenswrapper[4881]: I0121 11:20:46.659111 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-combined-ca-bundle\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.659352 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9358f706-24c3-46c5-8490-89402a85e9a4-scripts\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.659419 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9358f706-24c3-46c5-8490-89402a85e9a4-internal-tls-certs\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.660015 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-credential-keys\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.660990 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-internal-tls-certs\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.662376 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9358f706-24c3-46c5-8490-89402a85e9a4-config-data\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.664461 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-scripts\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.664631 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9358f706-24c3-46c5-8490-89402a85e9a4-public-tls-certs\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.665619 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-public-tls-certs\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.668156 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-config-data\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.668749 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6jns\" (UniqueName: \"kubernetes.io/projected/9358f706-24c3-46c5-8490-89402a85e9a4-kube-api-access-f6jns\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.671149 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpfnt\" (UniqueName: \"kubernetes.io/projected/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-kube-api-access-jpfnt\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.737326 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.746135 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.763090 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.773555 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.779650 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.779859 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.779953 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.793247 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.847416 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.857981 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmr72\" (UniqueName: \"kubernetes.io/projected/bf14e65c-4c95-4766-a2e2-57b040e9f192-kube-api-access-qmr72\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.858063 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf14e65c-4c95-4766-a2e2-57b040e9f192-logs\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.858090 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf14e65c-4c95-4766-a2e2-57b040e9f192-config-data\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.858141 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bf14e65c-4c95-4766-a2e2-57b040e9f192-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.858237 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf14e65c-4c95-4766-a2e2-57b040e9f192-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.858287 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf14e65c-4c95-4766-a2e2-57b040e9f192-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.858373 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf14e65c-4c95-4766-a2e2-57b040e9f192-public-tls-certs\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.883756 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.959756 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf14e65c-4c95-4766-a2e2-57b040e9f192-logs\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.959819 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf14e65c-4c95-4766-a2e2-57b040e9f192-config-data\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.959853 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bf14e65c-4c95-4766-a2e2-57b040e9f192-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.959908 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf14e65c-4c95-4766-a2e2-57b040e9f192-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.959931 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf14e65c-4c95-4766-a2e2-57b040e9f192-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.959975 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf14e65c-4c95-4766-a2e2-57b040e9f192-public-tls-certs\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.960057 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmr72\" (UniqueName: \"kubernetes.io/projected/bf14e65c-4c95-4766-a2e2-57b040e9f192-kube-api-access-qmr72\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.960211 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf14e65c-4c95-4766-a2e2-57b040e9f192-logs\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.970745 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf14e65c-4c95-4766-a2e2-57b040e9f192-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.971494 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf14e65c-4c95-4766-a2e2-57b040e9f192-config-data\") pod \"watcher-api-0\" (UID: 
\"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.971858 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf14e65c-4c95-4766-a2e2-57b040e9f192-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.978484 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmr72\" (UniqueName: \"kubernetes.io/projected/bf14e65c-4c95-4766-a2e2-57b040e9f192-kube-api-access-qmr72\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.979993 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bf14e65c-4c95-4766-a2e2-57b040e9f192-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.981496 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf14e65c-4c95-4766-a2e2-57b040e9f192-public-tls-certs\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:47 crc kubenswrapper[4881]: I0121 11:20:47.125543 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 21 11:20:47 crc kubenswrapper[4881]: I0121 11:20:47.329482 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6244bcac-82b7-4bd4-b93d-3def53490380" path="/var/lib/kubelet/pods/6244bcac-82b7-4bd4-b93d-3def53490380/volumes" Jan 21 11:20:47 crc kubenswrapper[4881]: I0121 11:20:47.489384 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mxb97" event={"ID":"349e8898-8b7c-414a-8357-d431c8b81bf4","Type":"ContainerStarted","Data":"c648692c811ad6f54f474e55240cf83d10bccce020989330faa953f52c62836c"} Jan 21 11:20:47 crc kubenswrapper[4881]: I0121 11:20:47.501142 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-857c5cc966-ggkc4"] Jan 21 11:20:47 crc kubenswrapper[4881]: I0121 11:20:47.552400 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-mxb97" podStartSLOduration=3.578856455 podStartE2EDuration="1m26.552374701s" podCreationTimestamp="2026-01-21 11:19:21 +0000 UTC" firstStartedPulling="2026-01-21 11:19:22.581563109 +0000 UTC m=+1349.841519568" lastFinishedPulling="2026-01-21 11:20:45.555081345 +0000 UTC m=+1432.815037814" observedRunningTime="2026-01-21 11:20:47.535957912 +0000 UTC m=+1434.795914381" watchObservedRunningTime="2026-01-21 11:20:47.552374701 +0000 UTC m=+1434.812331180" Jan 21 11:20:47 crc kubenswrapper[4881]: I0121 11:20:47.630741 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-59bf6c8c7b-wvc46"] Jan 21 11:20:47 crc kubenswrapper[4881]: W0121 11:20:47.705740 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9358f706_24c3_46c5_8490_89402a85e9a4.slice/crio-51cdd1269b38f5140e053e8d16ad4f55fb2eb455fa7567d79efdfa9a592d3a75 WatchSource:0}: Error finding container 
51cdd1269b38f5140e053e8d16ad4f55fb2eb455fa7567d79efdfa9a592d3a75: Status 404 returned error can't find the container with id 51cdd1269b38f5140e053e8d16ad4f55fb2eb455fa7567d79efdfa9a592d3a75 Jan 21 11:20:48 crc kubenswrapper[4881]: I0121 11:20:48.174525 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 21 11:20:48 crc kubenswrapper[4881]: I0121 11:20:48.506551 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"bf14e65c-4c95-4766-a2e2-57b040e9f192","Type":"ContainerStarted","Data":"6b80183fa2b269acf09d29b84e08613370a4044c48a698df3a6c8b59e8ebfec7"} Jan 21 11:20:48 crc kubenswrapper[4881]: I0121 11:20:48.509681 4881 generic.go:334] "Generic (PLEG): container finished" podID="869a596b-159c-4185-a4ab-0e36c5d130fc" containerID="60c7ee63bf67b35a7137c545eb5e36b0ba7f24fe96f583c9314a3bcf2ea933c6" exitCode=0 Jan 21 11:20:48 crc kubenswrapper[4881]: I0121 11:20:48.509747 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-t6mz2" event={"ID":"869a596b-159c-4185-a4ab-0e36c5d130fc","Type":"ContainerDied","Data":"60c7ee63bf67b35a7137c545eb5e36b0ba7f24fe96f583c9314a3bcf2ea933c6"} Jan 21 11:20:48 crc kubenswrapper[4881]: I0121 11:20:48.521876 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-59bf6c8c7b-wvc46" event={"ID":"9358f706-24c3-46c5-8490-89402a85e9a4","Type":"ContainerStarted","Data":"f5edee1d07e346d14eb5323aedec597a7a2da39a3e6b4d62b96bd2921e5c2f54"} Jan 21 11:20:48 crc kubenswrapper[4881]: I0121 11:20:48.521920 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-59bf6c8c7b-wvc46" event={"ID":"9358f706-24c3-46c5-8490-89402a85e9a4","Type":"ContainerStarted","Data":"51cdd1269b38f5140e053e8d16ad4f55fb2eb455fa7567d79efdfa9a592d3a75"} Jan 21 11:20:48 crc kubenswrapper[4881]: I0121 11:20:48.536761 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4wxvl" event={"ID":"65250dcf-0f0f-4fa6-8d57-e07d3d29f290","Type":"ContainerStarted","Data":"6641f95a17dea3fe9aff6d4faf3bd17425257c19253868f2b83b7d7d759a48fd"} Jan 21 11:20:48 crc kubenswrapper[4881]: I0121 11:20:48.551952 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-857c5cc966-ggkc4" event={"ID":"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952","Type":"ContainerStarted","Data":"5fb9d1c4eabc2cf0819a1fa3677c7d9fe8945f3612149fe9af8c01e80ad3006a"} Jan 21 11:20:48 crc kubenswrapper[4881]: I0121 11:20:48.551991 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-857c5cc966-ggkc4" event={"ID":"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952","Type":"ContainerStarted","Data":"89e3b8f2fee171d30e8a7e5bbdb1527af0e178f6abf0bb7076780ed8e2c03cd2"} Jan 21 11:20:48 crc kubenswrapper[4881]: I0121 11:20:48.558321 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:48 crc kubenswrapper[4881]: I0121 11:20:48.583312 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-4wxvl" podStartSLOduration=6.123219748 podStartE2EDuration="1m3.583291221s" podCreationTimestamp="2026-01-21 11:19:45 +0000 UTC" firstStartedPulling="2026-01-21 11:19:48.090696275 +0000 UTC m=+1375.350652744" lastFinishedPulling="2026-01-21 11:20:45.550767748 +0000 UTC m=+1432.810724217" observedRunningTime="2026-01-21 11:20:48.567804855 +0000 UTC m=+1435.827761324" watchObservedRunningTime="2026-01-21 11:20:48.583291221 +0000 UTC m=+1435.843247690" Jan 
21 11:20:48 crc kubenswrapper[4881]: I0121 11:20:48.615255 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-857c5cc966-ggkc4" podStartSLOduration=2.6152269759999998 podStartE2EDuration="2.615226976s" podCreationTimestamp="2026-01-21 11:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:20:48.596655484 +0000 UTC m=+1435.856611963" watchObservedRunningTime="2026-01-21 11:20:48.615226976 +0000 UTC m=+1435.875183445" Jan 21 11:20:49 crc kubenswrapper[4881]: I0121 11:20:49.498668 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 21 11:20:49 crc kubenswrapper[4881]: I0121 11:20:49.527054 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:20:49 crc kubenswrapper[4881]: I0121 11:20:49.528841 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:20:49 crc kubenswrapper[4881]: I0121 11:20:49.534315 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 21 11:20:49 crc kubenswrapper[4881]: I0121 11:20:49.587725 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-59bf6c8c7b-wvc46" event={"ID":"9358f706-24c3-46c5-8490-89402a85e9a4","Type":"ContainerStarted","Data":"efb662df28813811348cba77f05d7d8acb958e1416f129d11a16e0b31591d4b8"} Jan 21 11:20:49 crc kubenswrapper[4881]: I0121 11:20:49.587837 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:49 crc kubenswrapper[4881]: I0121 11:20:49.587875 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:49 crc kubenswrapper[4881]: I0121 11:20:49.591161 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"bf14e65c-4c95-4766-a2e2-57b040e9f192","Type":"ContainerStarted","Data":"8a5aad798e8071a262f3a24177b130f3e97233d2d837f365a875625312c98420"} Jan 21 11:20:49 crc kubenswrapper[4881]: I0121 11:20:49.591201 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"bf14e65c-4c95-4766-a2e2-57b040e9f192","Type":"ContainerStarted","Data":"a3b941a2ad0b66190a31ef6f2915a1a156d561cd57311dc9b96d730cd5bfc66c"} Jan 21 11:20:49 crc kubenswrapper[4881]: I0121 11:20:49.591630 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 21 11:20:49 crc kubenswrapper[4881]: I0121 11:20:49.624198 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-59bf6c8c7b-wvc46" podStartSLOduration=3.624175069 podStartE2EDuration="3.624175069s" podCreationTimestamp="2026-01-21 11:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:20:49.615190636 +0000 UTC m=+1436.875147105" watchObservedRunningTime="2026-01-21 11:20:49.624175069 +0000 UTC m=+1436.884131538" Jan 21 11:20:49 crc kubenswrapper[4881]: I0121 11:20:49.659116 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 21 11:20:49 crc kubenswrapper[4881]: I0121 11:20:49.667402 4881 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=3.667376984 podStartE2EDuration="3.667376984s" podCreationTimestamp="2026-01-21 11:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:20:49.644385532 +0000 UTC m=+1436.904342001" watchObservedRunningTime="2026-01-21 11:20:49.667376984 +0000 UTC m=+1436.927333443" Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.091887 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-t6mz2" Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.281509 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/869a596b-159c-4185-a4ab-0e36c5d130fc-config\") pod \"869a596b-159c-4185-a4ab-0e36c5d130fc\" (UID: \"869a596b-159c-4185-a4ab-0e36c5d130fc\") " Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.281734 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/869a596b-159c-4185-a4ab-0e36c5d130fc-combined-ca-bundle\") pod \"869a596b-159c-4185-a4ab-0e36c5d130fc\" (UID: \"869a596b-159c-4185-a4ab-0e36c5d130fc\") " Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.281780 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dscc6\" (UniqueName: \"kubernetes.io/projected/869a596b-159c-4185-a4ab-0e36c5d130fc-kube-api-access-dscc6\") pod \"869a596b-159c-4185-a4ab-0e36c5d130fc\" (UID: \"869a596b-159c-4185-a4ab-0e36c5d130fc\") " Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.304552 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869a596b-159c-4185-a4ab-0e36c5d130fc-kube-api-access-dscc6" (OuterVolumeSpecName: "kube-api-access-dscc6") pod "869a596b-159c-4185-a4ab-0e36c5d130fc" (UID: "869a596b-159c-4185-a4ab-0e36c5d130fc"). InnerVolumeSpecName "kube-api-access-dscc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.312646 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/869a596b-159c-4185-a4ab-0e36c5d130fc-config" (OuterVolumeSpecName: "config") pod "869a596b-159c-4185-a4ab-0e36c5d130fc" (UID: "869a596b-159c-4185-a4ab-0e36c5d130fc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.313100 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/869a596b-159c-4185-a4ab-0e36c5d130fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "869a596b-159c-4185-a4ab-0e36c5d130fc" (UID: "869a596b-159c-4185-a4ab-0e36c5d130fc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.385246 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/869a596b-159c-4185-a4ab-0e36c5d130fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.385312 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dscc6\" (UniqueName: \"kubernetes.io/projected/869a596b-159c-4185-a4ab-0e36c5d130fc-kube-api-access-dscc6\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.385330 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/869a596b-159c-4185-a4ab-0e36c5d130fc-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.606464 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-t6mz2" Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.615873 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-t6mz2" event={"ID":"869a596b-159c-4185-a4ab-0e36c5d130fc","Type":"ContainerDied","Data":"60332241610e38a80a618de620e24fb0c01532db2d0020dd0177b716555cd915"} Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.615932 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60332241610e38a80a618de620e24fb0c01532db2d0020dd0177b716555cd915" Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.616287 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.636552 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ncbfx" podUID="6a8083e9-c68d-40ca-bde9-b84e43b65ab8" containerName="registry-server" probeResult="failure" output=< Jan 21 11:20:50 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 11:20:50 crc kubenswrapper[4881]: > Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.891135 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66498f95d9-n6nvg"] Jan 21 11:20:50 crc kubenswrapper[4881]: E0121 11:20:50.897853 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869a596b-159c-4185-a4ab-0e36c5d130fc" containerName="neutron-db-sync" Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.898281 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="869a596b-159c-4185-a4ab-0e36c5d130fc" containerName="neutron-db-sync" Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.904088 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="869a596b-159c-4185-a4ab-0e36c5d130fc" containerName="neutron-db-sync" Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.910250 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.991522 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66498f95d9-n6nvg"] Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.003189 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-ovsdbserver-sb\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.003346 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcslt\" (UniqueName: \"kubernetes.io/projected/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-kube-api-access-zcslt\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.003434 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-config\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.003482 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-dns-svc\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.003507 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-ovsdbserver-nb\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.003538 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-dns-swift-storage-0\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.022948 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-796dd99876-gb7nt"] Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.024981 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.031682 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-kj7bj" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.032481 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.033263 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.033580 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.041405 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-796dd99876-gb7nt"] Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.105925 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcslt\" (UniqueName: \"kubernetes.io/projected/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-kube-api-access-zcslt\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.106021 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-config\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.106080 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-ovsdbserver-nb\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.106101 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-dns-svc\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.106129 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-dns-swift-storage-0\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.106214 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-ovsdbserver-sb\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.107497 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-config\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " 
pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.107687 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-dns-svc\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.107880 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-ovsdbserver-nb\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.108024 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-dns-swift-storage-0\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.108132 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-ovsdbserver-sb\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.131712 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcslt\" (UniqueName: \"kubernetes.io/projected/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-kube-api-access-zcslt\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.207994 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-combined-ca-bundle\") pod \"neutron-796dd99876-gb7nt\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.208067 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-ovndb-tls-certs\") pod \"neutron-796dd99876-gb7nt\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.208147 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-httpd-config\") pod \"neutron-796dd99876-gb7nt\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.208561 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-config\") pod \"neutron-796dd99876-gb7nt\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 
crc kubenswrapper[4881]: I0121 11:20:51.208606 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgwv9\" (UniqueName: \"kubernetes.io/projected/f51f915e-f553-4130-a16b-9e6af68a5a15-kube-api-access-lgwv9\") pod \"neutron-796dd99876-gb7nt\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.275193 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.311413 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-config\") pod \"neutron-796dd99876-gb7nt\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.312186 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgwv9\" (UniqueName: \"kubernetes.io/projected/f51f915e-f553-4130-a16b-9e6af68a5a15-kube-api-access-lgwv9\") pod \"neutron-796dd99876-gb7nt\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.312303 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-combined-ca-bundle\") pod \"neutron-796dd99876-gb7nt\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.312346 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-ovndb-tls-certs\") pod \"neutron-796dd99876-gb7nt\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.312476 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-httpd-config\") pod \"neutron-796dd99876-gb7nt\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.318897 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-config\") pod \"neutron-796dd99876-gb7nt\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.319549 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-httpd-config\") pod \"neutron-796dd99876-gb7nt\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.323329 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-combined-ca-bundle\") pod \"neutron-796dd99876-gb7nt\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " 
pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.327915 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-ovndb-tls-certs\") pod \"neutron-796dd99876-gb7nt\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.331426 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgwv9\" (UniqueName: \"kubernetes.io/projected/f51f915e-f553-4130-a16b-9e6af68a5a15-kube-api-access-lgwv9\") pod \"neutron-796dd99876-gb7nt\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.363600 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.972532 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66498f95d9-n6nvg"] Jan 21 11:20:51 crc kubenswrapper[4881]: W0121 11:20:51.985675 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a4d2e63_3d53_44ef_8968_22a7ced8d0fe.slice/crio-53bbfd2a49add8edadc389aeebfde92d8828c88f0f666671d93498d8d53c2567 WatchSource:0}: Error finding container 53bbfd2a49add8edadc389aeebfde92d8828c88f0f666671d93498d8d53c2567: Status 404 returned error can't find the container with id 53bbfd2a49add8edadc389aeebfde92d8828c88f0f666671d93498d8d53c2567 Jan 21 11:20:52 crc kubenswrapper[4881]: I0121 11:20:52.129813 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 21 11:20:52 crc kubenswrapper[4881]: I0121 11:20:52.226858 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-796dd99876-gb7nt"] Jan 21 11:20:52 crc kubenswrapper[4881]: W0121 11:20:52.254484 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf51f915e_f553_4130_a16b_9e6af68a5a15.slice/crio-2e4be17fa483a6184f2eda034f9fc33ec23230c3292d5bb3f6f80cd50bfff6e9 WatchSource:0}: Error finding container 2e4be17fa483a6184f2eda034f9fc33ec23230c3292d5bb3f6f80cd50bfff6e9: Status 404 returned error can't find the container with id 2e4be17fa483a6184f2eda034f9fc33ec23230c3292d5bb3f6f80cd50bfff6e9 Jan 21 11:20:52 crc kubenswrapper[4881]: I0121 11:20:52.643863 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-796dd99876-gb7nt" event={"ID":"f51f915e-f553-4130-a16b-9e6af68a5a15","Type":"ContainerStarted","Data":"2e4be17fa483a6184f2eda034f9fc33ec23230c3292d5bb3f6f80cd50bfff6e9"} Jan 21 11:20:52 crc kubenswrapper[4881]: I0121 11:20:52.647830 4881 generic.go:334] "Generic (PLEG): container finished" podID="3a4d2e63-3d53-44ef-8968-22a7ced8d0fe" containerID="fa55b39990f74afb936b29eb6ca3dc719ebcf2a4b47a29af77516eac502e8d26" exitCode=0 Jan 21 11:20:52 crc kubenswrapper[4881]: I0121 11:20:52.647933 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" event={"ID":"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe","Type":"ContainerDied","Data":"fa55b39990f74afb936b29eb6ca3dc719ebcf2a4b47a29af77516eac502e8d26"} Jan 21 11:20:52 crc kubenswrapper[4881]: I0121 11:20:52.647966 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" event={"ID":"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe","Type":"ContainerStarted","Data":"53bbfd2a49add8edadc389aeebfde92d8828c88f0f666671d93498d8d53c2567"} Jan 21 11:20:52 crc kubenswrapper[4881]: I0121 11:20:52.652887 4881 generic.go:334] "Generic (PLEG): container finished" podID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerID="61f6b4008e5afe3c84bc4dbf116ba996728224955a2729f3dc2de6c1a2eeb445" exitCode=1 Jan 21 11:20:52 crc kubenswrapper[4881]: I0121 11:20:52.652985 4881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 11:20:52 crc kubenswrapper[4881]: I0121 11:20:52.653174 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e","Type":"ContainerDied","Data":"61f6b4008e5afe3c84bc4dbf116ba996728224955a2729f3dc2de6c1a2eeb445"} Jan 21 11:20:52 crc kubenswrapper[4881]: I0121 11:20:52.653224 4881 scope.go:117] "RemoveContainer" containerID="5db7a5c0d23dd82d2a5258870db858ab9345870f09ad31cd41b42f8d9eaa1f90" Jan 21 11:20:52 crc kubenswrapper[4881]: I0121 11:20:52.653580 4881 scope.go:117] "RemoveContainer" containerID="61f6b4008e5afe3c84bc4dbf116ba996728224955a2729f3dc2de6c1a2eeb445" Jan 21 11:20:52 crc kubenswrapper[4881]: E0121 11:20:52.653775 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(ee4e7116-c2cd-43d5-af6b-9f30b5053e0e)\"" pod="openstack/watcher-decision-engine-0" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.376948 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-667d9dbbbc-pcbhd"] Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.397412 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.400154 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.429510 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.441349 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-667d9dbbbc-pcbhd"] Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.509081 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-config\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.509500 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-ovndb-tls-certs\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.509654 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-httpd-config\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.509749 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-internal-tls-certs\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.509851 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-public-tls-certs\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.509930 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-combined-ca-bundle\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.510070 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdf29\" (UniqueName: \"kubernetes.io/projected/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-kube-api-access-fdf29\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.612066 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-httpd-config\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.612128 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-internal-tls-certs\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.612163 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-combined-ca-bundle\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.612177 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-public-tls-certs\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.612251 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdf29\" (UniqueName: \"kubernetes.io/projected/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-kube-api-access-fdf29\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.612300 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-config\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.612345 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-ovndb-tls-certs\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.618879 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-ovndb-tls-certs\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.622709 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-httpd-config\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.623135 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-combined-ca-bundle\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: 
\"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.627654 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-internal-tls-certs\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.628356 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-public-tls-certs\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.637650 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-config\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.640666 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdf29\" (UniqueName: \"kubernetes.io/projected/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-kube-api-access-fdf29\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.673105 4881 generic.go:334] "Generic (PLEG): container finished" podID="4bf52889-d5f3-44f8-b657-8ff3790962d1" containerID="3a796b1b54b7432132400a5a214afb4cf61aaada5f5054cc747d5e74194d9dae" exitCode=0 Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.673215 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-slhtz" event={"ID":"4bf52889-d5f3-44f8-b657-8ff3790962d1","Type":"ContainerDied","Data":"3a796b1b54b7432132400a5a214afb4cf61aaada5f5054cc747d5e74194d9dae"} Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.681145 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-796dd99876-gb7nt" event={"ID":"f51f915e-f553-4130-a16b-9e6af68a5a15","Type":"ContainerStarted","Data":"3a9e17862c5ff2f64ddcb7cb3eb9d73424fbbcd62c695e9a6f00fe4f1a20f86b"} Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.766877 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:54 crc kubenswrapper[4881]: I0121 11:20:54.471586 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 21 11:20:57 crc kubenswrapper[4881]: I0121 11:20:57.127877 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Jan 21 11:20:57 crc kubenswrapper[4881]: I0121 11:20:57.173283 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Jan 21 11:20:57 crc kubenswrapper[4881]: I0121 11:20:57.727654 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:20:57 crc kubenswrapper[4881]: I0121 11:20:57.755306 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 21 11:20:58 crc kubenswrapper[4881]: I0121 11:20:58.003410 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-68b447d964-6llq5" Jan 21 11:20:58 crc kubenswrapper[4881]: I0121 11:20:58.964167 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-slhtz" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.058448 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4bf52889-d5f3-44f8-b657-8ff3790962d1-db-sync-config-data\") pod \"4bf52889-d5f3-44f8-b657-8ff3790962d1\" (UID: \"4bf52889-d5f3-44f8-b657-8ff3790962d1\") " Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.058659 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf52889-d5f3-44f8-b657-8ff3790962d1-combined-ca-bundle\") pod \"4bf52889-d5f3-44f8-b657-8ff3790962d1\" (UID: \"4bf52889-d5f3-44f8-b657-8ff3790962d1\") " Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.058825 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7pcb\" (UniqueName: \"kubernetes.io/projected/4bf52889-d5f3-44f8-b657-8ff3790962d1-kube-api-access-j7pcb\") pod \"4bf52889-d5f3-44f8-b657-8ff3790962d1\" (UID: \"4bf52889-d5f3-44f8-b657-8ff3790962d1\") " Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.068619 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bf52889-d5f3-44f8-b657-8ff3790962d1-kube-api-access-j7pcb" (OuterVolumeSpecName: "kube-api-access-j7pcb") pod "4bf52889-d5f3-44f8-b657-8ff3790962d1" (UID: "4bf52889-d5f3-44f8-b657-8ff3790962d1"). InnerVolumeSpecName "kube-api-access-j7pcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.071394 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf52889-d5f3-44f8-b657-8ff3790962d1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4bf52889-d5f3-44f8-b657-8ff3790962d1" (UID: "4bf52889-d5f3-44f8-b657-8ff3790962d1"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.109964 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf52889-d5f3-44f8-b657-8ff3790962d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bf52889-d5f3-44f8-b657-8ff3790962d1" (UID: "4bf52889-d5f3-44f8-b657-8ff3790962d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.165474 4881 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4bf52889-d5f3-44f8-b657-8ff3790962d1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.165509 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf52889-d5f3-44f8-b657-8ff3790962d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.165523 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7pcb\" (UniqueName: \"kubernetes.io/projected/4bf52889-d5f3-44f8-b657-8ff3790962d1-kube-api-access-j7pcb\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.498529 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.498592 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.498610 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.499509 4881 scope.go:117] "RemoveContainer" containerID="61f6b4008e5afe3c84bc4dbf116ba996728224955a2729f3dc2de6c1a2eeb445" Jan 21 11:20:59 crc kubenswrapper[4881]: E0121 11:20:59.499921 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(ee4e7116-c2cd-43d5-af6b-9f30b5053e0e)\"" pod="openstack/watcher-decision-engine-0" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.818305 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-slhtz" event={"ID":"4bf52889-d5f3-44f8-b657-8ff3790962d1","Type":"ContainerDied","Data":"370f02f399b03911d8ee654e46609c08288e0d57caf3655dba13b0b2e545df19"} Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.818686 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="370f02f399b03911d8ee654e46609c08288e0d57caf3655dba13b0b2e545df19" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.818767 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-slhtz" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.851701 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.851989 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.852134 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.852268 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.853173 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7331cbf4e5c1ebad90ff508798581f83536e17ac3c1ee9a79afc3f65f6e8ad1a"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.853232 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://7331cbf4e5c1ebad90ff508798581f83536e17ac3c1ee9a79afc3f65f6e8ad1a" gracePeriod=600 Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.069301 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-68b447d964-6llq5" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.149874 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69c96776fd-k2z88"] Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.318836 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-55755579c5-csgz2"] Jan 21 11:21:00 crc kubenswrapper[4881]: E0121 11:21:00.320193 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf52889-d5f3-44f8-b657-8ff3790962d1" containerName="barbican-db-sync" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.320223 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf52889-d5f3-44f8-b657-8ff3790962d1" containerName="barbican-db-sync" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.320539 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf52889-d5f3-44f8-b657-8ff3790962d1" containerName="barbican-db-sync" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.329630 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.333830 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.334099 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-cl6xz" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.335196 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.345836 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90253f07-2dfb-48b3-9b75-34a653836589-config-data-custom\") pod \"barbican-worker-55755579c5-csgz2\" (UID: \"90253f07-2dfb-48b3-9b75-34a653836589\") " pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.346028 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6982w\" (UniqueName: \"kubernetes.io/projected/90253f07-2dfb-48b3-9b75-34a653836589-kube-api-access-6982w\") pod \"barbican-worker-55755579c5-csgz2\" (UID: \"90253f07-2dfb-48b3-9b75-34a653836589\") " pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.346109 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90253f07-2dfb-48b3-9b75-34a653836589-config-data\") pod \"barbican-worker-55755579c5-csgz2\" (UID: \"90253f07-2dfb-48b3-9b75-34a653836589\") " pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.346288 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90253f07-2dfb-48b3-9b75-34a653836589-logs\") pod \"barbican-worker-55755579c5-csgz2\" (UID: \"90253f07-2dfb-48b3-9b75-34a653836589\") " pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.346654 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90253f07-2dfb-48b3-9b75-34a653836589-combined-ca-bundle\") pod \"barbican-worker-55755579c5-csgz2\" (UID: \"90253f07-2dfb-48b3-9b75-34a653836589\") " pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.347325 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-54f549c774-rnptw"] Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.361957 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-55755579c5-csgz2"] Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.362168 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-54f549c774-rnptw" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.370884 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.380709 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-54f549c774-rnptw"] Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.436466 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-667d9dbbbc-pcbhd"] Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.458397 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90253f07-2dfb-48b3-9b75-34a653836589-config-data-custom\") pod \"barbican-worker-55755579c5-csgz2\" (UID: \"90253f07-2dfb-48b3-9b75-34a653836589\") " pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.458587 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6982w\" (UniqueName: \"kubernetes.io/projected/90253f07-2dfb-48b3-9b75-34a653836589-kube-api-access-6982w\") pod \"barbican-worker-55755579c5-csgz2\" (UID: \"90253f07-2dfb-48b3-9b75-34a653836589\") " pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.458673 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90253f07-2dfb-48b3-9b75-34a653836589-config-data\") pod \"barbican-worker-55755579c5-csgz2\" (UID: \"90253f07-2dfb-48b3-9b75-34a653836589\") " pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.458706 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90253f07-2dfb-48b3-9b75-34a653836589-logs\") pod \"barbican-worker-55755579c5-csgz2\" (UID: \"90253f07-2dfb-48b3-9b75-34a653836589\") " pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.458844 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90253f07-2dfb-48b3-9b75-34a653836589-combined-ca-bundle\") pod \"barbican-worker-55755579c5-csgz2\" (UID: \"90253f07-2dfb-48b3-9b75-34a653836589\") " pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.460120 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90253f07-2dfb-48b3-9b75-34a653836589-logs\") pod \"barbican-worker-55755579c5-csgz2\" (UID: \"90253f07-2dfb-48b3-9b75-34a653836589\") " pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.467291 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90253f07-2dfb-48b3-9b75-34a653836589-combined-ca-bundle\") pod \"barbican-worker-55755579c5-csgz2\" (UID: \"90253f07-2dfb-48b3-9b75-34a653836589\") " pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.482511 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/90253f07-2dfb-48b3-9b75-34a653836589-config-data-custom\") pod \"barbican-worker-55755579c5-csgz2\" (UID: \"90253f07-2dfb-48b3-9b75-34a653836589\") " pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.492569 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6982w\" (UniqueName: \"kubernetes.io/projected/90253f07-2dfb-48b3-9b75-34a653836589-kube-api-access-6982w\") pod \"barbican-worker-55755579c5-csgz2\" (UID: \"90253f07-2dfb-48b3-9b75-34a653836589\") " pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.495477 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90253f07-2dfb-48b3-9b75-34a653836589-config-data\") pod \"barbican-worker-55755579c5-csgz2\" (UID: \"90253f07-2dfb-48b3-9b75-34a653836589\") " pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.556470 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66498f95d9-n6nvg"] Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.572099 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e80f53a-8873-4c07-b738-2854d9b8b089-logs\") pod \"barbican-keystone-listener-54f549c774-rnptw\" (UID: \"6e80f53a-8873-4c07-b738-2854d9b8b089\") " pod="openstack/barbican-keystone-listener-54f549c774-rnptw" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.572228 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb2gq\" (UniqueName: \"kubernetes.io/projected/6e80f53a-8873-4c07-b738-2854d9b8b089-kube-api-access-wb2gq\") pod \"barbican-keystone-listener-54f549c774-rnptw\" (UID: \"6e80f53a-8873-4c07-b738-2854d9b8b089\") " pod="openstack/barbican-keystone-listener-54f549c774-rnptw" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.572306 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e80f53a-8873-4c07-b738-2854d9b8b089-config-data\") pod \"barbican-keystone-listener-54f549c774-rnptw\" (UID: \"6e80f53a-8873-4c07-b738-2854d9b8b089\") " pod="openstack/barbican-keystone-listener-54f549c774-rnptw" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.572428 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e80f53a-8873-4c07-b738-2854d9b8b089-combined-ca-bundle\") pod \"barbican-keystone-listener-54f549c774-rnptw\" (UID: \"6e80f53a-8873-4c07-b738-2854d9b8b089\") " pod="openstack/barbican-keystone-listener-54f549c774-rnptw" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.572566 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e80f53a-8873-4c07-b738-2854d9b8b089-config-data-custom\") pod \"barbican-keystone-listener-54f549c774-rnptw\" (UID: \"6e80f53a-8873-4c07-b738-2854d9b8b089\") " pod="openstack/barbican-keystone-listener-54f549c774-rnptw" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.583779 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69f96db49f-qzf9p"] Jan 21 11:21:00 crc 
Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.587087 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69f96db49f-qzf9p"
Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.588244 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ncbfx" podUID="6a8083e9-c68d-40ca-bde9-b84e43b65ab8" containerName="registry-server" probeResult="failure" output=<
Jan 21 11:21:00 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s
Jan 21 11:21:00 crc kubenswrapper[4881]: >
Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.633688 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69f96db49f-qzf9p"]
Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.652373 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-55755579c5-csgz2"
Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.675090 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e80f53a-8873-4c07-b738-2854d9b8b089-logs\") pod \"barbican-keystone-listener-54f549c774-rnptw\" (UID: \"6e80f53a-8873-4c07-b738-2854d9b8b089\") " pod="openstack/barbican-keystone-listener-54f549c774-rnptw"
Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.675482 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb2gq\" (UniqueName: \"kubernetes.io/projected/6e80f53a-8873-4c07-b738-2854d9b8b089-kube-api-access-wb2gq\") pod \"barbican-keystone-listener-54f549c774-rnptw\" (UID: \"6e80f53a-8873-4c07-b738-2854d9b8b089\") " pod="openstack/barbican-keystone-listener-54f549c774-rnptw"
Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.675533 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e80f53a-8873-4c07-b738-2854d9b8b089-config-data\") pod \"barbican-keystone-listener-54f549c774-rnptw\" (UID: \"6e80f53a-8873-4c07-b738-2854d9b8b089\") " pod="openstack/barbican-keystone-listener-54f549c774-rnptw"
Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.675589 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e80f53a-8873-4c07-b738-2854d9b8b089-combined-ca-bundle\") pod \"barbican-keystone-listener-54f549c774-rnptw\" (UID: \"6e80f53a-8873-4c07-b738-2854d9b8b089\") " pod="openstack/barbican-keystone-listener-54f549c774-rnptw"
Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.675640 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e80f53a-8873-4c07-b738-2854d9b8b089-config-data-custom\") pod \"barbican-keystone-listener-54f549c774-rnptw\" (UID: \"6e80f53a-8873-4c07-b738-2854d9b8b089\") " pod="openstack/barbican-keystone-listener-54f549c774-rnptw"
Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.676564 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e80f53a-8873-4c07-b738-2854d9b8b089-logs\") pod \"barbican-keystone-listener-54f549c774-rnptw\" (UID: \"6e80f53a-8873-4c07-b738-2854d9b8b089\") " pod="openstack/barbican-keystone-listener-54f549c774-rnptw"
Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.683898 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e80f53a-8873-4c07-b738-2854d9b8b089-combined-ca-bundle\") pod \"barbican-keystone-listener-54f549c774-rnptw\" (UID: \"6e80f53a-8873-4c07-b738-2854d9b8b089\") " pod="openstack/barbican-keystone-listener-54f549c774-rnptw"
Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.685561 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e80f53a-8873-4c07-b738-2854d9b8b089-config-data\") pod \"barbican-keystone-listener-54f549c774-rnptw\" (UID: \"6e80f53a-8873-4c07-b738-2854d9b8b089\") " pod="openstack/barbican-keystone-listener-54f549c774-rnptw"
Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.689471 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e80f53a-8873-4c07-b738-2854d9b8b089-config-data-custom\") pod \"barbican-keystone-listener-54f549c774-rnptw\" (UID: \"6e80f53a-8873-4c07-b738-2854d9b8b089\") " pod="openstack/barbican-keystone-listener-54f549c774-rnptw"
Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.704136 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6cbb6fc6b6-tlfhj"]
Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.706376 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj"
Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.709220 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb2gq\" (UniqueName: \"kubernetes.io/projected/6e80f53a-8873-4c07-b738-2854d9b8b089-kube-api-access-wb2gq\") pod \"barbican-keystone-listener-54f549c774-rnptw\" (UID: \"6e80f53a-8873-4c07-b738-2854d9b8b089\") " pod="openstack/barbican-keystone-listener-54f549c774-rnptw"
Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.716652 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.721959 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6cbb6fc6b6-tlfhj"]
Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.778759 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-ovsdbserver-sb\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p"
Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.778863 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-dns-svc\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p"
Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.779034 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-config\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p"
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-ovsdbserver-nb\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.779250 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-dns-swift-storage-0\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.779623 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfwwk\" (UniqueName: \"kubernetes.io/projected/d2ecfd63-c654-42e9-b324-22c02d21b506-kube-api-access-sfwwk\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.856091 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-796dd99876-gb7nt" event={"ID":"f51f915e-f553-4130-a16b-9e6af68a5a15","Type":"ContainerStarted","Data":"d69bb72f9eba472479b5b854a392dd678dcf12a1e5ab100dffbf954eda114573"} Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.856317 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.870085 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" event={"ID":"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe","Type":"ContainerStarted","Data":"502e6f906f1978cd73b6fd52aa270b0a25fe565d624b6874af91148a542bee58"} Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.870909 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.882940 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfwwk\" (UniqueName: \"kubernetes.io/projected/d2ecfd63-c654-42e9-b324-22c02d21b506-kube-api-access-sfwwk\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.882997 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-config-data\") pod \"barbican-api-6cbb6fc6b6-tlfhj\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.883036 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-ovsdbserver-sb\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.883064 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-dns-svc\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: 
\"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.883102 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsnsj\" (UniqueName: \"kubernetes.io/projected/85f05121-bd30-4b3f-936d-dc20e30fca06-kube-api-access-rsnsj\") pod \"barbican-api-6cbb6fc6b6-tlfhj\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.883146 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-config\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.883170 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85f05121-bd30-4b3f-936d-dc20e30fca06-logs\") pod \"barbican-api-6cbb6fc6b6-tlfhj\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.883208 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-ovsdbserver-nb\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.883237 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-combined-ca-bundle\") pod \"barbican-api-6cbb6fc6b6-tlfhj\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.883258 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-config-data-custom\") pod \"barbican-api-6cbb6fc6b6-tlfhj\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.883287 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-dns-swift-storage-0\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.884728 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-dns-swift-storage-0\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.885817 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-ovsdbserver-sb\") pod 
\"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.889846 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bcec3c24-87bd-4c22-a800-d3835455a38b","Type":"ContainerStarted","Data":"7a2597fbfe970937452b64ccef79f25aaeee72972449d78e0549c998d5351134"} Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.890247 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="ceilometer-central-agent" containerID="cri-o://04c2a8411b86bd02035922d4fe1ad96f1a1dbf240fbfa10221b52bc6ac101706" gracePeriod=30 Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.890289 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="sg-core" containerID="cri-o://ca18caa0fee509128e7ffae2755d6b5b1126bfe1c63366090fd0947db93d8443" gracePeriod=30 Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.890276 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-dns-svc\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.890385 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="ceilometer-notification-agent" containerID="cri-o://b14382df533ca3054b8542bddeff2d41d2f1e579142ea3b20b1a7a9c276362b8" gracePeriod=30 Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.890492 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.890440 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="proxy-httpd" containerID="cri-o://7a2597fbfe970937452b64ccef79f25aaeee72972449d78e0549c998d5351134" gracePeriod=30 Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.895045 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-config\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.894442 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-ovsdbserver-nb\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.904521 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-796dd99876-gb7nt" podStartSLOduration=10.904498468 podStartE2EDuration="10.904498468s" podCreationTimestamp="2026-01-21 11:20:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:00.889106745 
+0000 UTC m=+1448.149063214" watchObservedRunningTime="2026-01-21 11:21:00.904498468 +0000 UTC m=+1448.164454937" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.914424 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-667d9dbbbc-pcbhd" event={"ID":"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9","Type":"ContainerStarted","Data":"5c0eca339ef26596d70dd7e8649e504d14255f85aa417abb37517636935e7473"} Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.925208 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfwwk\" (UniqueName: \"kubernetes.io/projected/d2ecfd63-c654-42e9-b324-22c02d21b506-kube-api-access-sfwwk\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.946220 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" podStartSLOduration=10.946195106 podStartE2EDuration="10.946195106s" podCreationTimestamp="2026-01-21 11:20:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:00.922569668 +0000 UTC m=+1448.182526137" watchObservedRunningTime="2026-01-21 11:21:00.946195106 +0000 UTC m=+1448.206151585" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.957127 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.975569 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="7331cbf4e5c1ebad90ff508798581f83536e17ac3c1ee9a79afc3f65f6e8ad1a" exitCode=0 Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.975844 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-69c96776fd-k2z88" podUID="2f516fb6-322b-4eee-9d4d-a10176959bbb" containerName="horizon-log" containerID="cri-o://c37cb0dabfc7bd198de45353bd7d592c9381160bf0f186350e93353fe2ea4470" gracePeriod=30 Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.976131 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"7331cbf4e5c1ebad90ff508798581f83536e17ac3c1ee9a79afc3f65f6e8ad1a"} Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.976185 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca"} Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.976208 4881 scope.go:117] "RemoveContainer" containerID="d0f3ab6355e31b97e337f7f21fb84796e3dea68bac874475991ce7eb43a93a82" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.976658 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-69c96776fd-k2z88" podUID="2f516fb6-322b-4eee-9d4d-a10176959bbb" containerName="horizon" containerID="cri-o://20e9501e200b98586a1c9e7d12e2adf41d01903bd2505ab83e7f8f0fc5404f52" gracePeriod=30 Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.987678 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-combined-ca-bundle\") pod \"barbican-api-6cbb6fc6b6-tlfhj\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.987735 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-config-data-custom\") pod \"barbican-api-6cbb6fc6b6-tlfhj\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.988032 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-config-data\") pod \"barbican-api-6cbb6fc6b6-tlfhj\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.988227 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsnsj\" (UniqueName: \"kubernetes.io/projected/85f05121-bd30-4b3f-936d-dc20e30fca06-kube-api-access-rsnsj\") pod \"barbican-api-6cbb6fc6b6-tlfhj\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.988372 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85f05121-bd30-4b3f-936d-dc20e30fca06-logs\") pod \"barbican-api-6cbb6fc6b6-tlfhj\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.992107 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85f05121-bd30-4b3f-936d-dc20e30fca06-logs\") pod \"barbican-api-6cbb6fc6b6-tlfhj\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:01 crc kubenswrapper[4881]: I0121 11:21:00.997220 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-combined-ca-bundle\") pod \"barbican-api-6cbb6fc6b6-tlfhj\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:01 crc kubenswrapper[4881]: I0121 11:21:00.997713 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-54f549c774-rnptw" Jan 21 11:21:01 crc kubenswrapper[4881]: I0121 11:21:01.006884 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-config-data\") pod \"barbican-api-6cbb6fc6b6-tlfhj\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:01 crc kubenswrapper[4881]: I0121 11:21:01.019804 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-config-data-custom\") pod \"barbican-api-6cbb6fc6b6-tlfhj\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:01 crc kubenswrapper[4881]: I0121 11:21:01.022427 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.022803688 podStartE2EDuration="1m16.022404422s" podCreationTimestamp="2026-01-21 11:19:45 +0000 UTC" firstStartedPulling="2026-01-21 11:19:47.861601855 +0000 UTC m=+1375.121558324" lastFinishedPulling="2026-01-21 11:20:59.861202589 +0000 UTC m=+1447.121159058" observedRunningTime="2026-01-21 11:21:00.972246634 +0000 UTC m=+1448.232203103" watchObservedRunningTime="2026-01-21 11:21:01.022404422 +0000 UTC m=+1448.282360891" Jan 21 11:21:01 crc kubenswrapper[4881]: I0121 11:21:01.022905 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsnsj\" (UniqueName: \"kubernetes.io/projected/85f05121-bd30-4b3f-936d-dc20e30fca06-kube-api-access-rsnsj\") pod \"barbican-api-6cbb6fc6b6-tlfhj\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:01 crc kubenswrapper[4881]: I0121 11:21:01.054848 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:01 crc kubenswrapper[4881]: I0121 11:21:01.308099 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-55755579c5-csgz2"] Jan 21 11:21:01 crc kubenswrapper[4881]: I0121 11:21:01.644296 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69f96db49f-qzf9p"] Jan 21 11:21:01 crc kubenswrapper[4881]: W0121 11:21:01.645926 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2ecfd63_c654_42e9_b324_22c02d21b506.slice/crio-b2d41124075aed0e5d3723eb39479bb34ae77563466138e26829e292a42a163c WatchSource:0}: Error finding container b2d41124075aed0e5d3723eb39479bb34ae77563466138e26829e292a42a163c: Status 404 returned error can't find the container with id b2d41124075aed0e5d3723eb39479bb34ae77563466138e26829e292a42a163c Jan 21 11:21:01 crc kubenswrapper[4881]: I0121 11:21:01.837748 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-54f549c774-rnptw"] Jan 21 11:21:01 crc kubenswrapper[4881]: E0121 11:21:01.852900 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcec3c24_87bd_4c22_a800_d3835455a38b.slice/crio-04c2a8411b86bd02035922d4fe1ad96f1a1dbf240fbfa10221b52bc6ac101706.scope\": RecentStats: unable to find data in memory cache]" Jan 21 11:21:02 crc kubenswrapper[4881]: I0121 11:21:02.016390 4881 generic.go:334] "Generic (PLEG): container finished" podID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerID="7a2597fbfe970937452b64ccef79f25aaeee72972449d78e0549c998d5351134" exitCode=0 Jan 21 11:21:02 crc kubenswrapper[4881]: I0121 11:21:02.016826 4881 generic.go:334] "Generic (PLEG): container finished" podID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerID="ca18caa0fee509128e7ffae2755d6b5b1126bfe1c63366090fd0947db93d8443" exitCode=2 Jan 21 11:21:02 crc kubenswrapper[4881]: I0121 11:21:02.016835 4881 generic.go:334] "Generic (PLEG): container finished" podID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerID="04c2a8411b86bd02035922d4fe1ad96f1a1dbf240fbfa10221b52bc6ac101706" exitCode=0 Jan 21 11:21:02 crc kubenswrapper[4881]: I0121 11:21:02.016918 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bcec3c24-87bd-4c22-a800-d3835455a38b","Type":"ContainerDied","Data":"7a2597fbfe970937452b64ccef79f25aaeee72972449d78e0549c998d5351134"} Jan 21 11:21:02 crc kubenswrapper[4881]: I0121 11:21:02.016949 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bcec3c24-87bd-4c22-a800-d3835455a38b","Type":"ContainerDied","Data":"ca18caa0fee509128e7ffae2755d6b5b1126bfe1c63366090fd0947db93d8443"} Jan 21 11:21:02 crc kubenswrapper[4881]: I0121 11:21:02.016960 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bcec3c24-87bd-4c22-a800-d3835455a38b","Type":"ContainerDied","Data":"04c2a8411b86bd02035922d4fe1ad96f1a1dbf240fbfa10221b52bc6ac101706"} Jan 21 11:21:02 crc kubenswrapper[4881]: I0121 11:21:02.028539 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6cbb6fc6b6-tlfhj"] Jan 21 11:21:02 crc kubenswrapper[4881]: I0121 11:21:02.038353 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-54f549c774-rnptw" 
event={"ID":"6e80f53a-8873-4c07-b738-2854d9b8b089","Type":"ContainerStarted","Data":"8a851cacbff6f63fdcd19b9d99dcd44f0beccc9b727794d59895fbd1d06b5e2b"} Jan 21 11:21:02 crc kubenswrapper[4881]: I0121 11:21:02.046567 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-667d9dbbbc-pcbhd" event={"ID":"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9","Type":"ContainerStarted","Data":"7df1602479c0737d3c8958d570f9cbeba35e5715f926583fb77d0ec87c7486e1"} Jan 21 11:21:02 crc kubenswrapper[4881]: I0121 11:21:02.046623 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-667d9dbbbc-pcbhd" event={"ID":"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9","Type":"ContainerStarted","Data":"6220354aa4ade8d0f046ca74d11c614ca92041bd84251682dd52e97d0f4995f7"} Jan 21 11:21:02 crc kubenswrapper[4881]: I0121 11:21:02.046678 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:21:02 crc kubenswrapper[4881]: I0121 11:21:02.055532 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" event={"ID":"d2ecfd63-c654-42e9-b324-22c02d21b506","Type":"ContainerStarted","Data":"b2d41124075aed0e5d3723eb39479bb34ae77563466138e26829e292a42a163c"} Jan 21 11:21:02 crc kubenswrapper[4881]: I0121 11:21:02.086509 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-667d9dbbbc-pcbhd" podStartSLOduration=9.086486018 podStartE2EDuration="9.086486018s" podCreationTimestamp="2026-01-21 11:20:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:02.082018887 +0000 UTC m=+1449.341975356" watchObservedRunningTime="2026-01-21 11:21:02.086486018 +0000 UTC m=+1449.346442487" Jan 21 11:21:02 crc kubenswrapper[4881]: I0121 11:21:02.093920 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-55755579c5-csgz2" event={"ID":"90253f07-2dfb-48b3-9b75-34a653836589","Type":"ContainerStarted","Data":"01ff14eb6c7415d70cc8495bf9f82913d21e21a3010c85315863fc04a400d197"} Jan 21 11:21:02 crc kubenswrapper[4881]: I0121 11:21:02.094141 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" podUID="3a4d2e63-3d53-44ef-8968-22a7ced8d0fe" containerName="dnsmasq-dns" containerID="cri-o://502e6f906f1978cd73b6fd52aa270b0a25fe565d624b6874af91148a542bee58" gracePeriod=10 Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.111484 4881 generic.go:334] "Generic (PLEG): container finished" podID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerID="b14382df533ca3054b8542bddeff2d41d2f1e579142ea3b20b1a7a9c276362b8" exitCode=0 Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.111914 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bcec3c24-87bd-4c22-a800-d3835455a38b","Type":"ContainerDied","Data":"b14382df533ca3054b8542bddeff2d41d2f1e579142ea3b20b1a7a9c276362b8"} Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.114252 4881 generic.go:334] "Generic (PLEG): container finished" podID="d2ecfd63-c654-42e9-b324-22c02d21b506" containerID="ab96b5d1c6a41e54c1b2168c0a309330a7285a8a3d539c811f7b6cd696883974" exitCode=0 Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.114360 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" 
event={"ID":"d2ecfd63-c654-42e9-b324-22c02d21b506","Type":"ContainerDied","Data":"ab96b5d1c6a41e54c1b2168c0a309330a7285a8a3d539c811f7b6cd696883974"} Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.116451 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" event={"ID":"85f05121-bd30-4b3f-936d-dc20e30fca06","Type":"ContainerStarted","Data":"9af42ead045471788f06fad27bb79fcdf735280d710e2b7eaa693c5e2301f9f2"} Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.116497 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" event={"ID":"85f05121-bd30-4b3f-936d-dc20e30fca06","Type":"ContainerStarted","Data":"7876bc29105eec2a39d493ced73df7df6c703880a81ffba5229cbe6f92400377"} Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.121896 4881 generic.go:334] "Generic (PLEG): container finished" podID="2f516fb6-322b-4eee-9d4d-a10176959bbb" containerID="20e9501e200b98586a1c9e7d12e2adf41d01903bd2505ab83e7f8f0fc5404f52" exitCode=0 Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.121970 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69c96776fd-k2z88" event={"ID":"2f516fb6-322b-4eee-9d4d-a10176959bbb","Type":"ContainerDied","Data":"20e9501e200b98586a1c9e7d12e2adf41d01903bd2505ab83e7f8f0fc5404f52"} Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.125851 4881 generic.go:334] "Generic (PLEG): container finished" podID="3a4d2e63-3d53-44ef-8968-22a7ced8d0fe" containerID="502e6f906f1978cd73b6fd52aa270b0a25fe565d624b6874af91148a542bee58" exitCode=0 Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.125942 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" event={"ID":"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe","Type":"ContainerDied","Data":"502e6f906f1978cd73b6fd52aa270b0a25fe565d624b6874af91148a542bee58"} Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.519317 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.599078 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcec3c24-87bd-4c22-a800-d3835455a38b-run-httpd\") pod \"bcec3c24-87bd-4c22-a800-d3835455a38b\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.599231 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-scripts\") pod \"bcec3c24-87bd-4c22-a800-d3835455a38b\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.599337 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-config-data\") pod \"bcec3c24-87bd-4c22-a800-d3835455a38b\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.599459 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-combined-ca-bundle\") pod \"bcec3c24-87bd-4c22-a800-d3835455a38b\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.599507 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-sg-core-conf-yaml\") pod \"bcec3c24-87bd-4c22-a800-d3835455a38b\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.599589 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcec3c24-87bd-4c22-a800-d3835455a38b-log-httpd\") pod \"bcec3c24-87bd-4c22-a800-d3835455a38b\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.599663 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj6cp\" (UniqueName: \"kubernetes.io/projected/bcec3c24-87bd-4c22-a800-d3835455a38b-kube-api-access-bj6cp\") pod \"bcec3c24-87bd-4c22-a800-d3835455a38b\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.600198 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcec3c24-87bd-4c22-a800-d3835455a38b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bcec3c24-87bd-4c22-a800-d3835455a38b" (UID: "bcec3c24-87bd-4c22-a800-d3835455a38b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.600500 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcec3c24-87bd-4c22-a800-d3835455a38b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bcec3c24-87bd-4c22-a800-d3835455a38b" (UID: "bcec3c24-87bd-4c22-a800-d3835455a38b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.601451 4881 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcec3c24-87bd-4c22-a800-d3835455a38b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.601557 4881 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcec3c24-87bd-4c22-a800-d3835455a38b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.615486 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-scripts" (OuterVolumeSpecName: "scripts") pod "bcec3c24-87bd-4c22-a800-d3835455a38b" (UID: "bcec3c24-87bd-4c22-a800-d3835455a38b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.618873 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcec3c24-87bd-4c22-a800-d3835455a38b-kube-api-access-bj6cp" (OuterVolumeSpecName: "kube-api-access-bj6cp") pod "bcec3c24-87bd-4c22-a800-d3835455a38b" (UID: "bcec3c24-87bd-4c22-a800-d3835455a38b"). InnerVolumeSpecName "kube-api-access-bj6cp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.629542 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.663969 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bcec3c24-87bd-4c22-a800-d3835455a38b" (UID: "bcec3c24-87bd-4c22-a800-d3835455a38b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.703420 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-dns-svc\") pod \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.703536 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-ovsdbserver-sb\") pod \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.703607 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcslt\" (UniqueName: \"kubernetes.io/projected/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-kube-api-access-zcslt\") pod \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.703714 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-config\") pod \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.703932 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-dns-swift-storage-0\") pod \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.703982 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-ovsdbserver-nb\") pod \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.704557 4881 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.704584 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj6cp\" (UniqueName: \"kubernetes.io/projected/bcec3c24-87bd-4c22-a800-d3835455a38b-kube-api-access-bj6cp\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.704600 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.714897 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-kube-api-access-zcslt" (OuterVolumeSpecName: "kube-api-access-zcslt") pod "3a4d2e63-3d53-44ef-8968-22a7ced8d0fe" (UID: "3a4d2e63-3d53-44ef-8968-22a7ced8d0fe"). InnerVolumeSpecName "kube-api-access-zcslt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.783543 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-config" (OuterVolumeSpecName: "config") pod "3a4d2e63-3d53-44ef-8968-22a7ced8d0fe" (UID: "3a4d2e63-3d53-44ef-8968-22a7ced8d0fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.801267 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bcec3c24-87bd-4c22-a800-d3835455a38b" (UID: "bcec3c24-87bd-4c22-a800-d3835455a38b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.802080 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3a4d2e63-3d53-44ef-8968-22a7ced8d0fe" (UID: "3a4d2e63-3d53-44ef-8968-22a7ced8d0fe"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.806210 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.806239 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcslt\" (UniqueName: \"kubernetes.io/projected/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-kube-api-access-zcslt\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.806250 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.806266 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.807211 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-config-data" (OuterVolumeSpecName: "config-data") pod "bcec3c24-87bd-4c22-a800-d3835455a38b" (UID: "bcec3c24-87bd-4c22-a800-d3835455a38b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.811020 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3a4d2e63-3d53-44ef-8968-22a7ced8d0fe" (UID: "3a4d2e63-3d53-44ef-8968-22a7ced8d0fe"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.836668 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3a4d2e63-3d53-44ef-8968-22a7ced8d0fe" (UID: "3a4d2e63-3d53-44ef-8968-22a7ced8d0fe"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.863553 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3a4d2e63-3d53-44ef-8968-22a7ced8d0fe" (UID: "3a4d2e63-3d53-44ef-8968-22a7ced8d0fe"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.907846 4881 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.907882 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.907892 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.907902 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.078844 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7d6f7f4cc8-c4tt4"] Jan 21 11:21:04 crc kubenswrapper[4881]: E0121 11:21:04.079376 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="sg-core" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.079400 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="sg-core" Jan 21 11:21:04 crc kubenswrapper[4881]: E0121 11:21:04.079420 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="ceilometer-central-agent" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.079429 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="ceilometer-central-agent" Jan 21 11:21:04 crc kubenswrapper[4881]: E0121 11:21:04.079441 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a4d2e63-3d53-44ef-8968-22a7ced8d0fe" containerName="init" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.079448 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a4d2e63-3d53-44ef-8968-22a7ced8d0fe" containerName="init" Jan 21 11:21:04 crc kubenswrapper[4881]: E0121 11:21:04.079462 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a4d2e63-3d53-44ef-8968-22a7ced8d0fe" containerName="dnsmasq-dns" Jan 21 
11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.079468 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a4d2e63-3d53-44ef-8968-22a7ced8d0fe" containerName="dnsmasq-dns" Jan 21 11:21:04 crc kubenswrapper[4881]: E0121 11:21:04.079480 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="ceilometer-notification-agent" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.079488 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="ceilometer-notification-agent" Jan 21 11:21:04 crc kubenswrapper[4881]: E0121 11:21:04.079505 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="proxy-httpd" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.079511 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="proxy-httpd" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.079728 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="sg-core" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.079760 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a4d2e63-3d53-44ef-8968-22a7ced8d0fe" containerName="dnsmasq-dns" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.079800 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="proxy-httpd" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.079821 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="ceilometer-notification-agent" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.079839 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="ceilometer-central-agent" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.083127 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.087595 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.089941 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.099648 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7d6f7f4cc8-c4tt4"] Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.158591 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" event={"ID":"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe","Type":"ContainerDied","Data":"53bbfd2a49add8edadc389aeebfde92d8828c88f0f666671d93498d8d53c2567"} Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.158653 4881 scope.go:117] "RemoveContainer" containerID="502e6f906f1978cd73b6fd52aa270b0a25fe565d624b6874af91148a542bee58" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.158824 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.184004 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bcec3c24-87bd-4c22-a800-d3835455a38b","Type":"ContainerDied","Data":"254ee6473012064881c3b931949d5889b646c256080246e608ecc4945a005f58"} Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.184104 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.218718 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-combined-ca-bundle\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.218870 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-config-data-custom\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.219149 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-config-data\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.219274 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-public-tls-certs\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.219314 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-logs\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.219359 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6vwv\" (UniqueName: \"kubernetes.io/projected/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-kube-api-access-w6vwv\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.219400 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-internal-tls-certs\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.231611 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-66498f95d9-n6nvg"] Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.252902 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66498f95d9-n6nvg"] Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.282212 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.307989 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.321285 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-config-data\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.321938 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-public-tls-certs\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.322060 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-logs\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.322148 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6vwv\" (UniqueName: \"kubernetes.io/projected/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-kube-api-access-w6vwv\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.322230 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-internal-tls-certs\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.322332 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-combined-ca-bundle\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.322415 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-config-data-custom\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.325552 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-logs\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: 
\"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.331514 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-internal-tls-certs\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.331988 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-public-tls-certs\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.332552 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-config-data-custom\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.335352 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-combined-ca-bundle\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.337881 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-config-data\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.357823 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6vwv\" (UniqueName: \"kubernetes.io/projected/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-kube-api-access-w6vwv\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.366299 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.369752 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.383675 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.383835 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.400962 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.424432 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.424541 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.424597 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75119e97-b896-4b71-ab1f-28db45a4606d-log-httpd\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.424626 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75119e97-b896-4b71-ab1f-28db45a4606d-run-httpd\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.424678 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cmwf\" (UniqueName: \"kubernetes.io/projected/75119e97-b896-4b71-ab1f-28db45a4606d-kube-api-access-2cmwf\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.424729 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-config-data\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.424767 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-scripts\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.456599 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.526711 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.526890 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75119e97-b896-4b71-ab1f-28db45a4606d-log-httpd\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.526924 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75119e97-b896-4b71-ab1f-28db45a4606d-run-httpd\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.526988 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cmwf\" (UniqueName: \"kubernetes.io/projected/75119e97-b896-4b71-ab1f-28db45a4606d-kube-api-access-2cmwf\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.527054 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-config-data\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.527102 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-scripts\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.527178 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.527439 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75119e97-b896-4b71-ab1f-28db45a4606d-log-httpd\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.527459 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75119e97-b896-4b71-ab1f-28db45a4606d-run-httpd\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.532416 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " 
pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.535360 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-scripts\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.535681 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.537743 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-config-data\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.556112 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cmwf\" (UniqueName: \"kubernetes.io/projected/75119e97-b896-4b71-ab1f-28db45a4606d-kube-api-access-2cmwf\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.614488 4881 scope.go:117] "RemoveContainer" containerID="fa55b39990f74afb936b29eb6ca3dc719ebcf2a4b47a29af77516eac502e8d26" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.742649 4881 scope.go:117] "RemoveContainer" containerID="7a2597fbfe970937452b64ccef79f25aaeee72972449d78e0549c998d5351134" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.812536 4881 scope.go:117] "RemoveContainer" containerID="ca18caa0fee509128e7ffae2755d6b5b1126bfe1c63366090fd0947db93d8443" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.827252 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:21:05 crc kubenswrapper[4881]: I0121 11:21:05.124160 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-69c96776fd-k2z88" podUID="2f516fb6-322b-4eee-9d4d-a10176959bbb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.160:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.160:8443: connect: connection refused" Jan 21 11:21:05 crc kubenswrapper[4881]: I0121 11:21:05.125539 4881 scope.go:117] "RemoveContainer" containerID="b14382df533ca3054b8542bddeff2d41d2f1e579142ea3b20b1a7a9c276362b8" Jan 21 11:21:05 crc kubenswrapper[4881]: I0121 11:21:05.222961 4881 generic.go:334] "Generic (PLEG): container finished" podID="65250dcf-0f0f-4fa6-8d57-e07d3d29f290" containerID="6641f95a17dea3fe9aff6d4faf3bd17425257c19253868f2b83b7d7d759a48fd" exitCode=0 Jan 21 11:21:05 crc kubenswrapper[4881]: I0121 11:21:05.223097 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4wxvl" event={"ID":"65250dcf-0f0f-4fa6-8d57-e07d3d29f290","Type":"ContainerDied","Data":"6641f95a17dea3fe9aff6d4faf3bd17425257c19253868f2b83b7d7d759a48fd"} Jan 21 11:21:05 crc kubenswrapper[4881]: I0121 11:21:05.250932 4881 scope.go:117] "RemoveContainer" containerID="04c2a8411b86bd02035922d4fe1ad96f1a1dbf240fbfa10221b52bc6ac101706" Jan 21 11:21:05 crc kubenswrapper[4881]: I0121 11:21:05.343456 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a4d2e63-3d53-44ef-8968-22a7ced8d0fe" path="/var/lib/kubelet/pods/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe/volumes" Jan 21 11:21:05 crc kubenswrapper[4881]: I0121 11:21:05.345694 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" path="/var/lib/kubelet/pods/bcec3c24-87bd-4c22-a800-d3835455a38b/volumes" Jan 21 11:21:05 crc kubenswrapper[4881]: I0121 11:21:05.384708 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7d6f7f4cc8-c4tt4"] Jan 21 11:21:05 crc kubenswrapper[4881]: I0121 11:21:05.529165 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.240120 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-54f549c774-rnptw" event={"ID":"6e80f53a-8873-4c07-b738-2854d9b8b089","Type":"ContainerStarted","Data":"d8791563c1ca72988ffa5c7dd6721abff63dc81e7d6af0726e4381840048b729"} Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.240647 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-54f549c774-rnptw" event={"ID":"6e80f53a-8873-4c07-b738-2854d9b8b089","Type":"ContainerStarted","Data":"3c5a26b98954f78ce7a8ff7f8fcf9dc2e852f1f67ae837fcec1bb082944e5a82"} Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.242980 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" event={"ID":"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d","Type":"ContainerStarted","Data":"57adf152bcc2268a1ab736b8d2425c489a664b9e1996850dcca6047b3be237f2"} Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.243027 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" event={"ID":"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d","Type":"ContainerStarted","Data":"36fe06b953dbd0c2746adb410072bdf8e6dc67fad566cb6d5ab0d5b768131c92"} Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.248304 4881 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" event={"ID":"d2ecfd63-c654-42e9-b324-22c02d21b506","Type":"ContainerStarted","Data":"be5a6f1470e765f48f097fc450f52d809f8dde1c774ca2b5463ea172b9bb0587"} Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.248419 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.257208 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" event={"ID":"85f05121-bd30-4b3f-936d-dc20e30fca06","Type":"ContainerStarted","Data":"791785eb6fe44e62deb830a72f9b0fb2d75b8a52cfe9209138c6ef5d0b47ed74"} Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.258697 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.258726 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.265664 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-54f549c774-rnptw" podStartSLOduration=3.404066624 podStartE2EDuration="6.265648462s" podCreationTimestamp="2026-01-21 11:21:00 +0000 UTC" firstStartedPulling="2026-01-21 11:21:01.881806094 +0000 UTC m=+1449.141762553" lastFinishedPulling="2026-01-21 11:21:04.743387922 +0000 UTC m=+1452.003344391" observedRunningTime="2026-01-21 11:21:06.265187751 +0000 UTC m=+1453.525144230" watchObservedRunningTime="2026-01-21 11:21:06.265648462 +0000 UTC m=+1453.525604931" Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.267684 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75119e97-b896-4b71-ab1f-28db45a4606d","Type":"ContainerStarted","Data":"bc7224d9bf84f344828f19a13fb8096ac19d517cb3bb70d8fce495b5aa46625b"} Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.267747 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75119e97-b896-4b71-ab1f-28db45a4606d","Type":"ContainerStarted","Data":"9b7298fa3a3fcd477e8d84c1587f761e32e00a24d488249df9cca1ca349c7bc0"} Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.271153 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-55755579c5-csgz2" event={"ID":"90253f07-2dfb-48b3-9b75-34a653836589","Type":"ContainerStarted","Data":"17dedd4e1860567e14390962a5f62dfcb62566e788a9c94218631794328be6d0"} Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.271312 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-55755579c5-csgz2" event={"ID":"90253f07-2dfb-48b3-9b75-34a653836589","Type":"ContainerStarted","Data":"5e3bbdab8b8364a2eeaa709840c0197cabd9dda1a1b1cfd6ea9d0e61abb1fc04"} Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.302869 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" podStartSLOduration=6.302848649 podStartE2EDuration="6.302848649s" podCreationTimestamp="2026-01-21 11:21:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:06.292666634 +0000 UTC m=+1453.552623103" watchObservedRunningTime="2026-01-21 11:21:06.302848649 +0000 UTC m=+1453.562805118" Jan 21 11:21:06 crc 
kubenswrapper[4881]: I0121 11:21:06.320197 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" podStartSLOduration=6.3201785600000004 podStartE2EDuration="6.32017856s" podCreationTimestamp="2026-01-21 11:21:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:06.31977722 +0000 UTC m=+1453.579733689" watchObservedRunningTime="2026-01-21 11:21:06.32017856 +0000 UTC m=+1453.580135029" Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.345092 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-55755579c5-csgz2" podStartSLOduration=3.004303124 podStartE2EDuration="6.345074379s" podCreationTimestamp="2026-01-21 11:21:00 +0000 UTC" firstStartedPulling="2026-01-21 11:21:01.381271015 +0000 UTC m=+1448.641227484" lastFinishedPulling="2026-01-21 11:21:04.72204227 +0000 UTC m=+1451.981998739" observedRunningTime="2026-01-21 11:21:06.34109196 +0000 UTC m=+1453.601048429" watchObservedRunningTime="2026-01-21 11:21:06.345074379 +0000 UTC m=+1453.605030848" Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.863036 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.934734 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-kube-api-access-ltkw6" (OuterVolumeSpecName: "kube-api-access-ltkw6") pod "65250dcf-0f0f-4fa6-8d57-e07d3d29f290" (UID: "65250dcf-0f0f-4fa6-8d57-e07d3d29f290"). InnerVolumeSpecName "kube-api-access-ltkw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.938437 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltkw6\" (UniqueName: \"kubernetes.io/projected/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-kube-api-access-ltkw6\") pod \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.938643 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-combined-ca-bundle\") pod \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.938752 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-scripts\") pod \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.938801 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-etc-machine-id\") pod \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.938863 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-config-data\") pod \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\" (UID: 
\"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.938886 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-db-sync-config-data\") pod \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.938963 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "65250dcf-0f0f-4fa6-8d57-e07d3d29f290" (UID: "65250dcf-0f0f-4fa6-8d57-e07d3d29f290"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.939862 4881 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.939886 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltkw6\" (UniqueName: \"kubernetes.io/projected/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-kube-api-access-ltkw6\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.944028 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-scripts" (OuterVolumeSpecName: "scripts") pod "65250dcf-0f0f-4fa6-8d57-e07d3d29f290" (UID: "65250dcf-0f0f-4fa6-8d57-e07d3d29f290"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.948930 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "65250dcf-0f0f-4fa6-8d57-e07d3d29f290" (UID: "65250dcf-0f0f-4fa6-8d57-e07d3d29f290"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.998554 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-config-data" (OuterVolumeSpecName: "config-data") pod "65250dcf-0f0f-4fa6-8d57-e07d3d29f290" (UID: "65250dcf-0f0f-4fa6-8d57-e07d3d29f290"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:07 crc kubenswrapper[4881]: I0121 11:21:07.020648 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65250dcf-0f0f-4fa6-8d57-e07d3d29f290" (UID: "65250dcf-0f0f-4fa6-8d57-e07d3d29f290"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:07 crc kubenswrapper[4881]: I0121 11:21:07.045168 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:07 crc kubenswrapper[4881]: I0121 11:21:07.045208 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:07 crc kubenswrapper[4881]: I0121 11:21:07.045218 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:07 crc kubenswrapper[4881]: I0121 11:21:07.045227 4881 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:07 crc kubenswrapper[4881]: I0121 11:21:07.299704 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4wxvl" event={"ID":"65250dcf-0f0f-4fa6-8d57-e07d3d29f290","Type":"ContainerDied","Data":"fcbe801cf2c7f3f9ce63291d49a4353e90c810cdaa5f27e1d6112dedee1eae63"} Jan 21 11:21:07 crc kubenswrapper[4881]: I0121 11:21:07.300127 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcbe801cf2c7f3f9ce63291d49a4353e90c810cdaa5f27e1d6112dedee1eae63" Jan 21 11:21:07 crc kubenswrapper[4881]: I0121 11:21:07.299721 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:21:07 crc kubenswrapper[4881]: I0121 11:21:07.302527 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" event={"ID":"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d","Type":"ContainerStarted","Data":"81be590057d64d1af247cdbc56979bf76d7783982f1718a281d906ee494d55e6"} Jan 21 11:21:07 crc kubenswrapper[4881]: I0121 11:21:07.302644 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:07 crc kubenswrapper[4881]: I0121 11:21:07.302684 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:07 crc kubenswrapper[4881]: I0121 11:21:07.306415 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75119e97-b896-4b71-ab1f-28db45a4606d","Type":"ContainerStarted","Data":"53e2fe665bdaeb7b9eb972106db909c519d01d1c08141b3cb40de82bd0536330"} Jan 21 11:21:07 crc kubenswrapper[4881]: I0121 11:21:07.348095 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" podStartSLOduration=3.348072165 podStartE2EDuration="3.348072165s" podCreationTimestamp="2026-01-21 11:21:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:07.342031495 +0000 UTC m=+1454.601987954" watchObservedRunningTime="2026-01-21 11:21:07.348072165 +0000 UTC m=+1454.608028654" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.182665 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 11:21:08 crc kubenswrapper[4881]: E0121 
11:21:08.183587 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65250dcf-0f0f-4fa6-8d57-e07d3d29f290" containerName="cinder-db-sync" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.183615 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="65250dcf-0f0f-4fa6-8d57-e07d3d29f290" containerName="cinder-db-sync" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.193250 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="65250dcf-0f0f-4fa6-8d57-e07d3d29f290" containerName="cinder-db-sync" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.194922 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.198495 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.199280 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-9r4q7" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.199615 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.200465 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.210014 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.291662 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h42sc\" (UniqueName: \"kubernetes.io/projected/86045f5e-defd-4c68-a582-c51c9c26e5c7-kube-api-access-h42sc\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.291762 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-config-data\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.291851 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.291940 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-scripts\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.292032 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86045f5e-defd-4c68-a582-c51c9c26e5c7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.292151 4881 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.382511 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69f96db49f-qzf9p"] Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.385028 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" podUID="d2ecfd63-c654-42e9-b324-22c02d21b506" containerName="dnsmasq-dns" containerID="cri-o://be5a6f1470e765f48f097fc450f52d809f8dde1c774ca2b5463ea172b9bb0587" gracePeriod=10 Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.385413 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75119e97-b896-4b71-ab1f-28db45a4606d","Type":"ContainerStarted","Data":"899f70ee131f6e530963ca573a67921fd95a35fbdae76709308568e8f0b66d06"} Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.431304 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-scripts\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.431509 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86045f5e-defd-4c68-a582-c51c9c26e5c7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.431706 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.431755 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h42sc\" (UniqueName: \"kubernetes.io/projected/86045f5e-defd-4c68-a582-c51c9c26e5c7-kube-api-access-h42sc\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.431810 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86045f5e-defd-4c68-a582-c51c9c26e5c7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.434848 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-config-data\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.434964 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.446141 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-scripts\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.497665 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.528563 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77b944d67-mw2nq"] Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.481017 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.531018 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-config-data\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.549934 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77b944d67-mw2nq"] Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.550088 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.561442 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h42sc\" (UniqueName: \"kubernetes.io/projected/86045f5e-defd-4c68-a582-c51c9c26e5c7-kube-api-access-h42sc\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.657653 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8kc6\" (UniqueName: \"kubernetes.io/projected/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-kube-api-access-h8kc6\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.657744 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-ovsdbserver-sb\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.657830 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-dns-svc\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.657852 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-config\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.657879 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-ovsdbserver-nb\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.657940 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-dns-swift-storage-0\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.761335 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8kc6\" (UniqueName: \"kubernetes.io/projected/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-kube-api-access-h8kc6\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.761741 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-ovsdbserver-sb\") pod 
\"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.761838 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-dns-svc\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.761867 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-config\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.761905 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-ovsdbserver-nb\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.761984 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-dns-swift-storage-0\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.763166 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-dns-swift-storage-0\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.764145 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-ovsdbserver-sb\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.764750 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-dns-svc\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.765548 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-config\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.765968 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-ovsdbserver-nb\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc 
kubenswrapper[4881]: I0121 11:21:08.811668 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8kc6\" (UniqueName: \"kubernetes.io/projected/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-kube-api-access-h8kc6\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.848416 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.981405 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.981544 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.983801 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.990311 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.992900 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.201821 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-scripts\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.203545 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-etc-machine-id\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.203665 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.203687 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-config-data-custom\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.203712 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwkdn\" (UniqueName: \"kubernetes.io/projected/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-kube-api-access-wwkdn\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.203819 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-config-data\") pod \"cinder-api-0\" (UID: 
\"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.203854 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-logs\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.310902 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.311009 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-config-data-custom\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.311059 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwkdn\" (UniqueName: \"kubernetes.io/projected/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-kube-api-access-wwkdn\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.311166 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-config-data\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.311237 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-logs\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.311323 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-scripts\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.311469 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-etc-machine-id\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.311657 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-etc-machine-id\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.320161 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-logs\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " 
pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.335909 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-config-data-custom\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.343649 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.346701 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-config-data\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.364000 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwkdn\" (UniqueName: \"kubernetes.io/projected/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-kube-api-access-wwkdn\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.378321 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-scripts\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.452948 4881 generic.go:334] "Generic (PLEG): container finished" podID="d2ecfd63-c654-42e9-b324-22c02d21b506" containerID="be5a6f1470e765f48f097fc450f52d809f8dde1c774ca2b5463ea172b9bb0587" exitCode=0 Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.452999 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" event={"ID":"d2ecfd63-c654-42e9-b324-22c02d21b506","Type":"ContainerDied","Data":"be5a6f1470e765f48f097fc450f52d809f8dde1c774ca2b5463ea172b9bb0587"} Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.615233 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.778365 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 11:21:10 crc kubenswrapper[4881]: I0121 11:21:10.594641 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ncbfx" podUID="6a8083e9-c68d-40ca-bde9-b84e43b65ab8" containerName="registry-server" probeResult="failure" output=< Jan 21 11:21:10 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 11:21:10 crc kubenswrapper[4881]: > Jan 21 11:21:10 crc kubenswrapper[4881]: I0121 11:21:10.833191 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 11:21:11 crc kubenswrapper[4881]: I0121 11:21:11.600397 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77b944d67-mw2nq"] Jan 21 11:21:11 crc kubenswrapper[4881]: I0121 11:21:11.862195 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:12 crc kubenswrapper[4881]: I0121 11:21:12.312066 4881 scope.go:117] "RemoveContainer" containerID="61f6b4008e5afe3c84bc4dbf116ba996728224955a2729f3dc2de6c1a2eeb445" Jan 21 11:21:13 crc kubenswrapper[4881]: I0121 11:21:13.207346 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:15 crc kubenswrapper[4881]: I0121 11:21:15.124465 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-69c96776fd-k2z88" podUID="2f516fb6-322b-4eee-9d4d-a10176959bbb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.160:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.160:8443: connect: connection refused" Jan 21 11:21:15 crc kubenswrapper[4881]: I0121 11:21:15.959067 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" podUID="d2ecfd63-c654-42e9-b324-22c02d21b506" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.175:5353: i/o timeout" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.015880 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.113972 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.176342 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.222494 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-ovsdbserver-sb\") pod \"d2ecfd63-c654-42e9-b324-22c02d21b506\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.222631 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfwwk\" (UniqueName: \"kubernetes.io/projected/d2ecfd63-c654-42e9-b324-22c02d21b506-kube-api-access-sfwwk\") pod \"d2ecfd63-c654-42e9-b324-22c02d21b506\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.222673 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-dns-svc\") pod \"d2ecfd63-c654-42e9-b324-22c02d21b506\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.222693 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-config\") pod \"d2ecfd63-c654-42e9-b324-22c02d21b506\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.222946 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-ovsdbserver-nb\") pod \"d2ecfd63-c654-42e9-b324-22c02d21b506\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.223068 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-dns-swift-storage-0\") pod \"d2ecfd63-c654-42e9-b324-22c02d21b506\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.270094 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2ecfd63-c654-42e9-b324-22c02d21b506-kube-api-access-sfwwk" (OuterVolumeSpecName: "kube-api-access-sfwwk") pod "d2ecfd63-c654-42e9-b324-22c02d21b506" (UID: "d2ecfd63-c654-42e9-b324-22c02d21b506"). InnerVolumeSpecName "kube-api-access-sfwwk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.272296 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6cbb6fc6b6-tlfhj"] Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.272583 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" podUID="85f05121-bd30-4b3f-936d-dc20e30fca06" containerName="barbican-api-log" containerID="cri-o://9af42ead045471788f06fad27bb79fcdf735280d710e2b7eaa693c5e2301f9f2" gracePeriod=30 Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.272808 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" podUID="85f05121-bd30-4b3f-936d-dc20e30fca06" containerName="barbican-api" containerID="cri-o://791785eb6fe44e62deb830a72f9b0fb2d75b8a52cfe9209138c6ef5d0b47ed74" gracePeriod=30 Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.327135 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfwwk\" (UniqueName: \"kubernetes.io/projected/d2ecfd63-c654-42e9-b324-22c02d21b506-kube-api-access-sfwwk\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.356647 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d2ecfd63-c654-42e9-b324-22c02d21b506" (UID: "d2ecfd63-c654-42e9-b324-22c02d21b506"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.449173 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.477870 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d2ecfd63-c654-42e9-b324-22c02d21b506" (UID: "d2ecfd63-c654-42e9-b324-22c02d21b506"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.513112 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-config" (OuterVolumeSpecName: "config") pod "d2ecfd63-c654-42e9-b324-22c02d21b506" (UID: "d2ecfd63-c654-42e9-b324-22c02d21b506"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.527482 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d2ecfd63-c654-42e9-b324-22c02d21b506" (UID: "d2ecfd63-c654-42e9-b324-22c02d21b506"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.528362 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d2ecfd63-c654-42e9-b324-22c02d21b506" (UID: "d2ecfd63-c654-42e9-b324-22c02d21b506"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.550724 4881 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.550759 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.550771 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.583851 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.618293 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"86045f5e-defd-4c68-a582-c51c9c26e5c7","Type":"ContainerStarted","Data":"37f117f350f4a5bb6279fc8d328dfd979286450f9c150553b8cff2ebf1ef387c"} Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.644217 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e","Type":"ContainerStarted","Data":"5ccae223d32b8d30267f4d247c29e77d1942427c122a26bc75e9b00b89fa3bc0"} Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.677006 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" event={"ID":"d2ecfd63-c654-42e9-b324-22c02d21b506","Type":"ContainerDied","Data":"b2d41124075aed0e5d3723eb39479bb34ae77563466138e26829e292a42a163c"} Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.677075 4881 scope.go:117] "RemoveContainer" containerID="be5a6f1470e765f48f097fc450f52d809f8dde1c774ca2b5463ea172b9bb0587" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.677355 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.704107 4881 generic.go:334] "Generic (PLEG): container finished" podID="85f05121-bd30-4b3f-936d-dc20e30fca06" containerID="9af42ead045471788f06fad27bb79fcdf735280d710e2b7eaa693c5e2301f9f2" exitCode=143 Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.704456 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" event={"ID":"85f05121-bd30-4b3f-936d-dc20e30fca06","Type":"ContainerDied","Data":"9af42ead045471788f06fad27bb79fcdf735280d710e2b7eaa693c5e2301f9f2"} Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.726074 4881 generic.go:334] "Generic (PLEG): container finished" podID="b0326de6-1c1a-4e21-9592-ae86b46d7a3f" containerID="da41cb40adea77808d3ff28a4531a5534241d5f62e3dd8c6c92475b8c399e085" exitCode=0 Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.726200 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77b944d67-mw2nq" event={"ID":"b0326de6-1c1a-4e21-9592-ae86b46d7a3f","Type":"ContainerDied","Data":"da41cb40adea77808d3ff28a4531a5534241d5f62e3dd8c6c92475b8c399e085"} Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.726225 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77b944d67-mw2nq" event={"ID":"b0326de6-1c1a-4e21-9592-ae86b46d7a3f","Type":"ContainerStarted","Data":"74a53a8b6fc2a23210eccd53e198b676934ec49275b7b25077e7e841617ab615"} Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.758740 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69f96db49f-qzf9p"] Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.800093 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-69f96db49f-qzf9p"] Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.930926 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.982109 4881 scope.go:117] "RemoveContainer" containerID="ab96b5d1c6a41e54c1b2168c0a309330a7285a8a3d539c811f7b6cd696883974" Jan 21 11:21:16 crc kubenswrapper[4881]: W0121 11:21:16.994494 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48d1a17c_f3f7_4da9_bab3_d60bf8acf261.slice/crio-c0535afd842bb7eb500134a0de7821f679adee9739e8044161792e4e82bff780 WatchSource:0}: Error finding container c0535afd842bb7eb500134a0de7821f679adee9739e8044161792e4e82bff780: Status 404 returned error can't find the container with id c0535afd842bb7eb500134a0de7821f679adee9739e8044161792e4e82bff780 Jan 21 11:21:17 crc kubenswrapper[4881]: I0121 11:21:17.327048 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2ecfd63-c654-42e9-b324-22c02d21b506" path="/var/lib/kubelet/pods/d2ecfd63-c654-42e9-b324-22c02d21b506/volumes" Jan 21 11:21:17 crc kubenswrapper[4881]: I0121 11:21:17.756128 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"48d1a17c-f3f7-4da9-bab3-d60bf8acf261","Type":"ContainerStarted","Data":"c0535afd842bb7eb500134a0de7821f679adee9739e8044161792e4e82bff780"} Jan 21 11:21:18 crc kubenswrapper[4881]: I0121 11:21:18.806294 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"75119e97-b896-4b71-ab1f-28db45a4606d","Type":"ContainerStarted","Data":"80eb788c6d10eab27f68e4afaa093b8aa3a02ead209347f52848e0e84c80db9f"} Jan 21 11:21:18 crc kubenswrapper[4881]: I0121 11:21:18.808272 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 11:21:18 crc kubenswrapper[4881]: I0121 11:21:18.816235 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77b944d67-mw2nq" event={"ID":"b0326de6-1c1a-4e21-9592-ae86b46d7a3f","Type":"ContainerStarted","Data":"74a966ab9ba8420c744ac8e1932e9ad473ca91de2100fd5d2f1bf2544fd837be"} Jan 21 11:21:18 crc kubenswrapper[4881]: I0121 11:21:18.816913 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:18 crc kubenswrapper[4881]: I0121 11:21:18.862436 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.405756021 podStartE2EDuration="14.862408218s" podCreationTimestamp="2026-01-21 11:21:04 +0000 UTC" firstStartedPulling="2026-01-21 11:21:05.531352615 +0000 UTC m=+1452.791309084" lastFinishedPulling="2026-01-21 11:21:16.988004812 +0000 UTC m=+1464.247961281" observedRunningTime="2026-01-21 11:21:18.844144394 +0000 UTC m=+1466.104100863" watchObservedRunningTime="2026-01-21 11:21:18.862408218 +0000 UTC m=+1466.122364687" Jan 21 11:21:18 crc kubenswrapper[4881]: I0121 11:21:18.888858 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77b944d67-mw2nq" podStartSLOduration=10.888835236 podStartE2EDuration="10.888835236s" podCreationTimestamp="2026-01-21 11:21:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:18.881134734 +0000 UTC m=+1466.141091203" watchObservedRunningTime="2026-01-21 11:21:18.888835236 +0000 UTC m=+1466.148791695" Jan 21 11:21:19 crc kubenswrapper[4881]: I0121 11:21:19.279965 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:21:19 crc kubenswrapper[4881]: I0121 11:21:19.498069 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 21 11:21:19 crc kubenswrapper[4881]: I0121 11:21:19.589547 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 21 11:21:19 crc kubenswrapper[4881]: I0121 11:21:19.661725 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:21:19 crc kubenswrapper[4881]: I0121 11:21:19.749887 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:21:19 crc kubenswrapper[4881]: I0121 11:21:19.837638 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:21:19 crc kubenswrapper[4881]: I0121 11:21:19.915259 4881 generic.go:334] "Generic (PLEG): container finished" podID="85f05121-bd30-4b3f-936d-dc20e30fca06" containerID="791785eb6fe44e62deb830a72f9b0fb2d75b8a52cfe9209138c6ef5d0b47ed74" exitCode=0 Jan 21 11:21:19 crc kubenswrapper[4881]: I0121 11:21:19.915344 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" 
event={"ID":"85f05121-bd30-4b3f-936d-dc20e30fca06","Type":"ContainerDied","Data":"791785eb6fe44e62deb830a72f9b0fb2d75b8a52cfe9209138c6ef5d0b47ed74"} Jan 21 11:21:19 crc kubenswrapper[4881]: I0121 11:21:19.919508 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"48d1a17c-f3f7-4da9-bab3-d60bf8acf261","Type":"ContainerStarted","Data":"fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96"} Jan 21 11:21:19 crc kubenswrapper[4881]: I0121 11:21:19.920406 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ncbfx"] Jan 21 11:21:19 crc kubenswrapper[4881]: I0121 11:21:19.922953 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.026193 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.131083 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.220725 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsnsj\" (UniqueName: \"kubernetes.io/projected/85f05121-bd30-4b3f-936d-dc20e30fca06-kube-api-access-rsnsj\") pod \"85f05121-bd30-4b3f-936d-dc20e30fca06\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.220887 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-config-data\") pod \"85f05121-bd30-4b3f-936d-dc20e30fca06\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.220929 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-config-data-custom\") pod \"85f05121-bd30-4b3f-936d-dc20e30fca06\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.221096 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85f05121-bd30-4b3f-936d-dc20e30fca06-logs\") pod \"85f05121-bd30-4b3f-936d-dc20e30fca06\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.221130 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-combined-ca-bundle\") pod \"85f05121-bd30-4b3f-936d-dc20e30fca06\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.224587 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85f05121-bd30-4b3f-936d-dc20e30fca06-logs" (OuterVolumeSpecName: "logs") pod "85f05121-bd30-4b3f-936d-dc20e30fca06" (UID: "85f05121-bd30-4b3f-936d-dc20e30fca06"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.229370 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "85f05121-bd30-4b3f-936d-dc20e30fca06" (UID: "85f05121-bd30-4b3f-936d-dc20e30fca06"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.242391 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85f05121-bd30-4b3f-936d-dc20e30fca06-kube-api-access-rsnsj" (OuterVolumeSpecName: "kube-api-access-rsnsj") pod "85f05121-bd30-4b3f-936d-dc20e30fca06" (UID: "85f05121-bd30-4b3f-936d-dc20e30fca06"). InnerVolumeSpecName "kube-api-access-rsnsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.280288 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85f05121-bd30-4b3f-936d-dc20e30fca06" (UID: "85f05121-bd30-4b3f-936d-dc20e30fca06"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.308731 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-config-data" (OuterVolumeSpecName: "config-data") pod "85f05121-bd30-4b3f-936d-dc20e30fca06" (UID: "85f05121-bd30-4b3f-936d-dc20e30fca06"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.323584 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85f05121-bd30-4b3f-936d-dc20e30fca06-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.323622 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.323635 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsnsj\" (UniqueName: \"kubernetes.io/projected/85f05121-bd30-4b3f-936d-dc20e30fca06-kube-api-access-rsnsj\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.323646 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.323669 4881 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.932647 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"86045f5e-defd-4c68-a582-c51c9c26e5c7","Type":"ContainerStarted","Data":"f179a38b8e729fdba1d50653424c543fe9ebf0803e8ecb14e1eaa90d4edb87bf"} Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.935525 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" event={"ID":"85f05121-bd30-4b3f-936d-dc20e30fca06","Type":"ContainerDied","Data":"7876bc29105eec2a39d493ced73df7df6c703880a81ffba5229cbe6f92400377"} Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.935586 4881 scope.go:117] "RemoveContainer" containerID="791785eb6fe44e62deb830a72f9b0fb2d75b8a52cfe9209138c6ef5d0b47ed74" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.935638 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.945209 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"48d1a17c-f3f7-4da9-bab3-d60bf8acf261","Type":"ContainerStarted","Data":"dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9"} Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.945244 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="48d1a17c-f3f7-4da9-bab3-d60bf8acf261" containerName="cinder-api-log" containerID="cri-o://fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96" gracePeriod=30 Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.945353 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="48d1a17c-f3f7-4da9-bab3-d60bf8acf261" containerName="cinder-api" containerID="cri-o://dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9" gracePeriod=30 Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.945647 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.946842 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ncbfx" podUID="6a8083e9-c68d-40ca-bde9-b84e43b65ab8" containerName="registry-server" containerID="cri-o://bb51d30f717ade21f99893a221476158fedbab913c5592a0655c1dfba33d69c7" gracePeriod=2 Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.962813 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" podUID="d2ecfd63-c654-42e9-b324-22c02d21b506" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.175:5353: i/o timeout" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.972186 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=12.972167001 podStartE2EDuration="12.972167001s" podCreationTimestamp="2026-01-21 11:21:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:20.971199977 +0000 UTC m=+1468.231156456" watchObservedRunningTime="2026-01-21 11:21:20.972167001 +0000 UTC m=+1468.232123470" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.019631 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.030942 4881 scope.go:117] "RemoveContainer" containerID="9af42ead045471788f06fad27bb79fcdf735280d710e2b7eaa693c5e2301f9f2" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.077763 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6cbb6fc6b6-tlfhj"] Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.116547 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6cbb6fc6b6-tlfhj"] Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.325621 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85f05121-bd30-4b3f-936d-dc20e30fca06" path="/var/lib/kubelet/pods/85f05121-bd30-4b3f-936d-dc20e30fca06/volumes" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.376352 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.635483 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.760602 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79rxx\" (UniqueName: \"kubernetes.io/projected/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-kube-api-access-79rxx\") pod \"6a8083e9-c68d-40ca-bde9-b84e43b65ab8\" (UID: \"6a8083e9-c68d-40ca-bde9-b84e43b65ab8\") " Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.761005 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-catalog-content\") pod \"6a8083e9-c68d-40ca-bde9-b84e43b65ab8\" (UID: \"6a8083e9-c68d-40ca-bde9-b84e43b65ab8\") " Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.761047 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-utilities\") pod \"6a8083e9-c68d-40ca-bde9-b84e43b65ab8\" (UID: \"6a8083e9-c68d-40ca-bde9-b84e43b65ab8\") " Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.764157 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-utilities" (OuterVolumeSpecName: "utilities") pod "6a8083e9-c68d-40ca-bde9-b84e43b65ab8" (UID: "6a8083e9-c68d-40ca-bde9-b84e43b65ab8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.770093 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-kube-api-access-79rxx" (OuterVolumeSpecName: "kube-api-access-79rxx") pod "6a8083e9-c68d-40ca-bde9-b84e43b65ab8" (UID: "6a8083e9-c68d-40ca-bde9-b84e43b65ab8"). InnerVolumeSpecName "kube-api-access-79rxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.852950 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.866635 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79rxx\" (UniqueName: \"kubernetes.io/projected/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-kube-api-access-79rxx\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.866693 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.901613 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a8083e9-c68d-40ca-bde9-b84e43b65ab8" (UID: "6a8083e9-c68d-40ca-bde9-b84e43b65ab8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.963775 4881 generic.go:334] "Generic (PLEG): container finished" podID="6a8083e9-c68d-40ca-bde9-b84e43b65ab8" containerID="bb51d30f717ade21f99893a221476158fedbab913c5592a0655c1dfba33d69c7" exitCode=0 Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.963860 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncbfx" event={"ID":"6a8083e9-c68d-40ca-bde9-b84e43b65ab8","Type":"ContainerDied","Data":"bb51d30f717ade21f99893a221476158fedbab913c5592a0655c1dfba33d69c7"} Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.963888 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncbfx" event={"ID":"6a8083e9-c68d-40ca-bde9-b84e43b65ab8","Type":"ContainerDied","Data":"a06c31c201ce60f211d95724861d78b4cdd096d87a4ed5b0a3ede7c018cd2b3c"} Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.963906 4881 scope.go:117] "RemoveContainer" containerID="bb51d30f717ade21f99893a221476158fedbab913c5592a0655c1dfba33d69c7" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.964013 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.969253 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwkdn\" (UniqueName: \"kubernetes.io/projected/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-kube-api-access-wwkdn\") pod \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.969353 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-etc-machine-id\") pod \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.969395 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-config-data\") pod \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.969465 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-logs\") pod \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.969521 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-config-data-custom\") pod \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.969586 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-combined-ca-bundle\") pod \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.969631 4881 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-scripts\") pod \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.970148 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.974871 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "48d1a17c-f3f7-4da9-bab3-d60bf8acf261" (UID: "48d1a17c-f3f7-4da9-bab3-d60bf8acf261"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.974961 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-scripts" (OuterVolumeSpecName: "scripts") pod "48d1a17c-f3f7-4da9-bab3-d60bf8acf261" (UID: "48d1a17c-f3f7-4da9-bab3-d60bf8acf261"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.979079 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "48d1a17c-f3f7-4da9-bab3-d60bf8acf261" (UID: "48d1a17c-f3f7-4da9-bab3-d60bf8acf261"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.982125 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"86045f5e-defd-4c68-a582-c51c9c26e5c7","Type":"ContainerStarted","Data":"d776082f3aaee81cc1f230c5cf4abdaa34059f0c862a2df0c93b102e79762938"} Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.987249 4881 generic.go:334] "Generic (PLEG): container finished" podID="349e8898-8b7c-414a-8357-d431c8b81bf4" containerID="c648692c811ad6f54f474e55240cf83d10bccce020989330faa953f52c62836c" exitCode=0 Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.987340 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mxb97" event={"ID":"349e8898-8b7c-414a-8357-d431c8b81bf4","Type":"ContainerDied","Data":"c648692c811ad6f54f474e55240cf83d10bccce020989330faa953f52c62836c"} Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.990051 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-logs" (OuterVolumeSpecName: "logs") pod "48d1a17c-f3f7-4da9-bab3-d60bf8acf261" (UID: "48d1a17c-f3f7-4da9-bab3-d60bf8acf261"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.993453 4881 generic.go:334] "Generic (PLEG): container finished" podID="48d1a17c-f3f7-4da9-bab3-d60bf8acf261" containerID="dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9" exitCode=0 Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.993512 4881 generic.go:334] "Generic (PLEG): container finished" podID="48d1a17c-f3f7-4da9-bab3-d60bf8acf261" containerID="fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96" exitCode=143 Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.993923 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"48d1a17c-f3f7-4da9-bab3-d60bf8acf261","Type":"ContainerDied","Data":"dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9"} Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.993965 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.993973 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"48d1a17c-f3f7-4da9-bab3-d60bf8acf261","Type":"ContainerDied","Data":"fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96"} Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.994090 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"48d1a17c-f3f7-4da9-bab3-d60bf8acf261","Type":"ContainerDied","Data":"c0535afd842bb7eb500134a0de7821f679adee9739e8044161792e4e82bff780"} Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.994178 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-kube-api-access-wwkdn" (OuterVolumeSpecName: "kube-api-access-wwkdn") pod "48d1a17c-f3f7-4da9-bab3-d60bf8acf261" (UID: "48d1a17c-f3f7-4da9-bab3-d60bf8acf261"). InnerVolumeSpecName "kube-api-access-wwkdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.011707 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48d1a17c-f3f7-4da9-bab3-d60bf8acf261" (UID: "48d1a17c-f3f7-4da9-bab3-d60bf8acf261"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.013933 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=11.330477309 podStartE2EDuration="14.013912492s" podCreationTimestamp="2026-01-21 11:21:08 +0000 UTC" firstStartedPulling="2026-01-21 11:21:15.860681272 +0000 UTC m=+1463.120637741" lastFinishedPulling="2026-01-21 11:21:18.544116455 +0000 UTC m=+1465.804072924" observedRunningTime="2026-01-21 11:21:22.008166778 +0000 UTC m=+1469.268123257" watchObservedRunningTime="2026-01-21 11:21:22.013912492 +0000 UTC m=+1469.273868961" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.048317 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-config-data" (OuterVolumeSpecName: "config-data") pod "48d1a17c-f3f7-4da9-bab3-d60bf8acf261" (UID: "48d1a17c-f3f7-4da9-bab3-d60bf8acf261"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.072524 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwkdn\" (UniqueName: \"kubernetes.io/projected/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-kube-api-access-wwkdn\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.072565 4881 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.072575 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.072585 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.072595 4881 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.072603 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.072612 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.146872 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ncbfx"] Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.152149 4881 scope.go:117] "RemoveContainer" containerID="c80c8a89877e92046c31c2139dd4330c1447f9d23ecebf26a3928b9515ff61af" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.156174 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ncbfx"] Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.181440 4881 scope.go:117] "RemoveContainer" containerID="932fbf80100df4b5aa3c652842e044641d2f0a31589d5beff4fb8c850ca3a5fe" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.227319 4881 scope.go:117] "RemoveContainer" containerID="bb51d30f717ade21f99893a221476158fedbab913c5592a0655c1dfba33d69c7" Jan 21 11:21:22 crc kubenswrapper[4881]: E0121 11:21:22.230408 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb51d30f717ade21f99893a221476158fedbab913c5592a0655c1dfba33d69c7\": container with ID starting with bb51d30f717ade21f99893a221476158fedbab913c5592a0655c1dfba33d69c7 not found: ID does not exist" containerID="bb51d30f717ade21f99893a221476158fedbab913c5592a0655c1dfba33d69c7" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.230460 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb51d30f717ade21f99893a221476158fedbab913c5592a0655c1dfba33d69c7"} err="failed to get container status 
\"bb51d30f717ade21f99893a221476158fedbab913c5592a0655c1dfba33d69c7\": rpc error: code = NotFound desc = could not find container \"bb51d30f717ade21f99893a221476158fedbab913c5592a0655c1dfba33d69c7\": container with ID starting with bb51d30f717ade21f99893a221476158fedbab913c5592a0655c1dfba33d69c7 not found: ID does not exist" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.230497 4881 scope.go:117] "RemoveContainer" containerID="c80c8a89877e92046c31c2139dd4330c1447f9d23ecebf26a3928b9515ff61af" Jan 21 11:21:22 crc kubenswrapper[4881]: E0121 11:21:22.233973 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c80c8a89877e92046c31c2139dd4330c1447f9d23ecebf26a3928b9515ff61af\": container with ID starting with c80c8a89877e92046c31c2139dd4330c1447f9d23ecebf26a3928b9515ff61af not found: ID does not exist" containerID="c80c8a89877e92046c31c2139dd4330c1447f9d23ecebf26a3928b9515ff61af" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.234021 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c80c8a89877e92046c31c2139dd4330c1447f9d23ecebf26a3928b9515ff61af"} err="failed to get container status \"c80c8a89877e92046c31c2139dd4330c1447f9d23ecebf26a3928b9515ff61af\": rpc error: code = NotFound desc = could not find container \"c80c8a89877e92046c31c2139dd4330c1447f9d23ecebf26a3928b9515ff61af\": container with ID starting with c80c8a89877e92046c31c2139dd4330c1447f9d23ecebf26a3928b9515ff61af not found: ID does not exist" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.234049 4881 scope.go:117] "RemoveContainer" containerID="932fbf80100df4b5aa3c652842e044641d2f0a31589d5beff4fb8c850ca3a5fe" Jan 21 11:21:22 crc kubenswrapper[4881]: E0121 11:21:22.243964 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"932fbf80100df4b5aa3c652842e044641d2f0a31589d5beff4fb8c850ca3a5fe\": container with ID starting with 932fbf80100df4b5aa3c652842e044641d2f0a31589d5beff4fb8c850ca3a5fe not found: ID does not exist" containerID="932fbf80100df4b5aa3c652842e044641d2f0a31589d5beff4fb8c850ca3a5fe" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.244020 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"932fbf80100df4b5aa3c652842e044641d2f0a31589d5beff4fb8c850ca3a5fe"} err="failed to get container status \"932fbf80100df4b5aa3c652842e044641d2f0a31589d5beff4fb8c850ca3a5fe\": rpc error: code = NotFound desc = could not find container \"932fbf80100df4b5aa3c652842e044641d2f0a31589d5beff4fb8c850ca3a5fe\": container with ID starting with 932fbf80100df4b5aa3c652842e044641d2f0a31589d5beff4fb8c850ca3a5fe not found: ID does not exist" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.244058 4881 scope.go:117] "RemoveContainer" containerID="dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.333347 4881 scope.go:117] "RemoveContainer" containerID="fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.390089 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.394376 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.399031 4881 scope.go:117] "RemoveContainer" 
containerID="dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9" Jan 21 11:21:22 crc kubenswrapper[4881]: E0121 11:21:22.401528 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9\": container with ID starting with dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9 not found: ID does not exist" containerID="dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.401581 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9"} err="failed to get container status \"dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9\": rpc error: code = NotFound desc = could not find container \"dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9\": container with ID starting with dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9 not found: ID does not exist" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.401613 4881 scope.go:117] "RemoveContainer" containerID="fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96" Jan 21 11:21:22 crc kubenswrapper[4881]: E0121 11:21:22.408636 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96\": container with ID starting with fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96 not found: ID does not exist" containerID="fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.408705 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96"} err="failed to get container status \"fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96\": rpc error: code = NotFound desc = could not find container \"fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96\": container with ID starting with fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96 not found: ID does not exist" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.408745 4881 scope.go:117] "RemoveContainer" containerID="dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.412901 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9"} err="failed to get container status \"dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9\": rpc error: code = NotFound desc = could not find container \"dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9\": container with ID starting with dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9 not found: ID does not exist" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.412953 4881 scope.go:117] "RemoveContainer" containerID="fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.419642 4881 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96"} err="failed to get container status \"fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96\": rpc error: code = NotFound desc = could not find container \"fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96\": container with ID starting with fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96 not found: ID does not exist" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.426123 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 11:21:22 crc kubenswrapper[4881]: E0121 11:21:22.430037 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85f05121-bd30-4b3f-936d-dc20e30fca06" containerName="barbican-api" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.430090 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="85f05121-bd30-4b3f-936d-dc20e30fca06" containerName="barbican-api" Jan 21 11:21:22 crc kubenswrapper[4881]: E0121 11:21:22.430136 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2ecfd63-c654-42e9-b324-22c02d21b506" containerName="init" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.430147 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2ecfd63-c654-42e9-b324-22c02d21b506" containerName="init" Jan 21 11:21:22 crc kubenswrapper[4881]: E0121 11:21:22.430186 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a8083e9-c68d-40ca-bde9-b84e43b65ab8" containerName="registry-server" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.430201 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a8083e9-c68d-40ca-bde9-b84e43b65ab8" containerName="registry-server" Jan 21 11:21:22 crc kubenswrapper[4881]: E0121 11:21:22.430251 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a8083e9-c68d-40ca-bde9-b84e43b65ab8" containerName="extract-utilities" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.430261 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a8083e9-c68d-40ca-bde9-b84e43b65ab8" containerName="extract-utilities" Jan 21 11:21:22 crc kubenswrapper[4881]: E0121 11:21:22.430278 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48d1a17c-f3f7-4da9-bab3-d60bf8acf261" containerName="cinder-api-log" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.430287 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="48d1a17c-f3f7-4da9-bab3-d60bf8acf261" containerName="cinder-api-log" Jan 21 11:21:22 crc kubenswrapper[4881]: E0121 11:21:22.430309 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85f05121-bd30-4b3f-936d-dc20e30fca06" containerName="barbican-api-log" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.430318 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="85f05121-bd30-4b3f-936d-dc20e30fca06" containerName="barbican-api-log" Jan 21 11:21:22 crc kubenswrapper[4881]: E0121 11:21:22.430330 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48d1a17c-f3f7-4da9-bab3-d60bf8acf261" containerName="cinder-api" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.430339 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="48d1a17c-f3f7-4da9-bab3-d60bf8acf261" containerName="cinder-api" Jan 21 11:21:22 crc kubenswrapper[4881]: E0121 11:21:22.430354 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2ecfd63-c654-42e9-b324-22c02d21b506" containerName="dnsmasq-dns" 
Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.430362 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2ecfd63-c654-42e9-b324-22c02d21b506" containerName="dnsmasq-dns" Jan 21 11:21:22 crc kubenswrapper[4881]: E0121 11:21:22.430380 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a8083e9-c68d-40ca-bde9-b84e43b65ab8" containerName="extract-content" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.430391 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a8083e9-c68d-40ca-bde9-b84e43b65ab8" containerName="extract-content" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.430859 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a8083e9-c68d-40ca-bde9-b84e43b65ab8" containerName="registry-server" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.430884 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="48d1a17c-f3f7-4da9-bab3-d60bf8acf261" containerName="cinder-api" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.430902 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="48d1a17c-f3f7-4da9-bab3-d60bf8acf261" containerName="cinder-api-log" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.430926 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2ecfd63-c654-42e9-b324-22c02d21b506" containerName="dnsmasq-dns" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.430944 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="85f05121-bd30-4b3f-936d-dc20e30fca06" containerName="barbican-api" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.430962 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="85f05121-bd30-4b3f-936d-dc20e30fca06" containerName="barbican-api-log" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.439773 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.440025 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.443469 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.443642 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.443760 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.483637 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-scripts\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.483829 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae53e440-5bd5-41e3-8339-57eebaef03d2-logs\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.483912 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae53e440-5bd5-41e3-8339-57eebaef03d2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.483981 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.484079 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-config-data-custom\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.484179 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbk4d\" (UniqueName: \"kubernetes.io/projected/ae53e440-5bd5-41e3-8339-57eebaef03d2-kube-api-access-rbk4d\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.484311 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-config-data\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.484475 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 
11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.484563 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.586291 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbk4d\" (UniqueName: \"kubernetes.io/projected/ae53e440-5bd5-41e3-8339-57eebaef03d2-kube-api-access-rbk4d\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.586369 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-config-data\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.586443 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.586491 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.586540 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-scripts\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.586577 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae53e440-5bd5-41e3-8339-57eebaef03d2-logs\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.586765 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae53e440-5bd5-41e3-8339-57eebaef03d2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.587106 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae53e440-5bd5-41e3-8339-57eebaef03d2-logs\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.587288 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae53e440-5bd5-41e3-8339-57eebaef03d2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc 
kubenswrapper[4881]: I0121 11:21:22.587349 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.587425 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-config-data-custom\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.593054 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.593198 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-config-data-custom\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.595223 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.602005 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-scripts\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.602633 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.603234 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-config-data\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.607483 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbk4d\" (UniqueName: \"kubernetes.io/projected/ae53e440-5bd5-41e3-8339-57eebaef03d2-kube-api-access-rbk4d\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.767354 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.037390 4881 generic.go:334] "Generic (PLEG): container finished" podID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerID="5ccae223d32b8d30267f4d247c29e77d1942427c122a26bc75e9b00b89fa3bc0" exitCode=1 Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.037622 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e","Type":"ContainerDied","Data":"5ccae223d32b8d30267f4d247c29e77d1942427c122a26bc75e9b00b89fa3bc0"} Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.038177 4881 scope.go:117] "RemoveContainer" containerID="61f6b4008e5afe3c84bc4dbf116ba996728224955a2729f3dc2de6c1a2eeb445" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.039050 4881 scope.go:117] "RemoveContainer" containerID="5ccae223d32b8d30267f4d247c29e77d1942427c122a26bc75e9b00b89fa3bc0" Jan 21 11:21:23 crc kubenswrapper[4881]: E0121 11:21:23.039340 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(ee4e7116-c2cd-43d5-af6b-9f30b5053e0e)\"" pod="openstack/watcher-decision-engine-0" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.264881 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 11:21:23 crc kubenswrapper[4881]: W0121 11:21:23.283939 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae53e440_5bd5_41e3_8339_57eebaef03d2.slice/crio-c2c4191f74bf553a8a2dca661f23628aae4dc5fb419e29786f6ea024fe83ab3c WatchSource:0}: Error finding container c2c4191f74bf553a8a2dca661f23628aae4dc5fb419e29786f6ea024fe83ab3c: Status 404 returned error can't find the container with id c2c4191f74bf553a8a2dca661f23628aae4dc5fb419e29786f6ea024fe83ab3c Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.329669 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48d1a17c-f3f7-4da9-bab3-d60bf8acf261" path="/var/lib/kubelet/pods/48d1a17c-f3f7-4da9-bab3-d60bf8acf261/volumes" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.330933 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a8083e9-c68d-40ca-bde9-b84e43b65ab8" path="/var/lib/kubelet/pods/6a8083e9-c68d-40ca-bde9-b84e43b65ab8/volumes" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.610665 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-mxb97" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.713668 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-db-sync-config-data\") pod \"349e8898-8b7c-414a-8357-d431c8b81bf4\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.713795 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-combined-ca-bundle\") pod \"349e8898-8b7c-414a-8357-d431c8b81bf4\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.713821 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvn9r\" (UniqueName: \"kubernetes.io/projected/349e8898-8b7c-414a-8357-d431c8b81bf4-kube-api-access-gvn9r\") pod \"349e8898-8b7c-414a-8357-d431c8b81bf4\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.713843 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-config-data\") pod \"349e8898-8b7c-414a-8357-d431c8b81bf4\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.720923 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "349e8898-8b7c-414a-8357-d431c8b81bf4" (UID: "349e8898-8b7c-414a-8357-d431c8b81bf4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.727099 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/349e8898-8b7c-414a-8357-d431c8b81bf4-kube-api-access-gvn9r" (OuterVolumeSpecName: "kube-api-access-gvn9r") pod "349e8898-8b7c-414a-8357-d431c8b81bf4" (UID: "349e8898-8b7c-414a-8357-d431c8b81bf4"). InnerVolumeSpecName "kube-api-access-gvn9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.764747 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "349e8898-8b7c-414a-8357-d431c8b81bf4" (UID: "349e8898-8b7c-414a-8357-d431c8b81bf4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.784767 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.807049 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-config-data" (OuterVolumeSpecName: "config-data") pod "349e8898-8b7c-414a-8357-d431c8b81bf4" (UID: "349e8898-8b7c-414a-8357-d431c8b81bf4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.823972 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.830603 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvn9r\" (UniqueName: \"kubernetes.io/projected/349e8898-8b7c-414a-8357-d431c8b81bf4-kube-api-access-gvn9r\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.831011 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.831096 4881 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.849554 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.862139 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-796dd99876-gb7nt"] Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.862383 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-796dd99876-gb7nt" podUID="f51f915e-f553-4130-a16b-9e6af68a5a15" containerName="neutron-api" containerID="cri-o://3a9e17862c5ff2f64ddcb7cb3eb9d73424fbbcd62c695e9a6f00fe4f1a20f86b" gracePeriod=30 Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.862461 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-796dd99876-gb7nt" podUID="f51f915e-f553-4130-a16b-9e6af68a5a15" containerName="neutron-httpd" containerID="cri-o://d69bb72f9eba472479b5b854a392dd678dcf12a1e5ab100dffbf954eda114573" gracePeriod=30 Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.988043 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.087860 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"] Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.092192 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" podUID="a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f" containerName="dnsmasq-dns" containerID="cri-o://3c2fbfa61210bf849e04651287e22b6c198d4c12ea96a2312edd5e9f291c7879" gracePeriod=10 Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.132475 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ae53e440-5bd5-41e3-8339-57eebaef03d2","Type":"ContainerStarted","Data":"c2c4191f74bf553a8a2dca661f23628aae4dc5fb419e29786f6ea024fe83ab3c"} Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.153920 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-mxb97" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.157178 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mxb97" event={"ID":"349e8898-8b7c-414a-8357-d431c8b81bf4","Type":"ContainerDied","Data":"cd824796b06380fe0748d0a1334aa26a3fd0a19fab70225e560d35cfb754e2b4"} Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.157224 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd824796b06380fe0748d0a1334aa26a3fd0a19fab70225e560d35cfb754e2b4" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.545729 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c849cf559-fjllv"] Jan 21 11:21:24 crc kubenswrapper[4881]: E0121 11:21:24.546539 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="349e8898-8b7c-414a-8357-d431c8b81bf4" containerName="glance-db-sync" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.546551 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="349e8898-8b7c-414a-8357-d431c8b81bf4" containerName="glance-db-sync" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.546725 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="349e8898-8b7c-414a-8357-d431c8b81bf4" containerName="glance-db-sync" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.551306 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.569443 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd8b7\" (UniqueName: \"kubernetes.io/projected/4a89a9d0-4859-41cb-896d-f1a91e854d7b-kube-api-access-cd8b7\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.569495 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-ovsdbserver-sb\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.569517 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-dns-svc\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.569681 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-dns-swift-storage-0\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.569703 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-ovsdbserver-nb\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " 
pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.569748 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-config\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.586412 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c849cf559-fjllv"] Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.673473 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd8b7\" (UniqueName: \"kubernetes.io/projected/4a89a9d0-4859-41cb-896d-f1a91e854d7b-kube-api-access-cd8b7\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.673520 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-ovsdbserver-sb\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.673545 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-dns-svc\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.673661 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-dns-swift-storage-0\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.673677 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-ovsdbserver-nb\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.673737 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-config\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.674715 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-config\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.675383 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-dns-swift-storage-0\") pod 
\"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.675380 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-dns-svc\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.676238 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-ovsdbserver-sb\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.684209 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-ovsdbserver-nb\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.710593 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd8b7\" (UniqueName: \"kubernetes.io/projected/4a89a9d0-4859-41cb-896d-f1a91e854d7b-kube-api-access-cd8b7\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.722876 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.724283 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.731111 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.731297 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.737326 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-hk8hq" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.768091 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.783430 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6sr9\" (UniqueName: \"kubernetes.io/projected/b0b6ce2c-5ae8-496f-9374-d3069bc3d149-kube-api-access-m6sr9\") pod \"openstackclient\" (UID: \"b0b6ce2c-5ae8-496f-9374-d3069bc3d149\") " pod="openstack/openstackclient" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.783528 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b0b6ce2c-5ae8-496f-9374-d3069bc3d149-openstack-config-secret\") pod \"openstackclient\" (UID: \"b0b6ce2c-5ae8-496f-9374-d3069bc3d149\") " pod="openstack/openstackclient" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.783889 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b0b6ce2c-5ae8-496f-9374-d3069bc3d149-openstack-config\") pod \"openstackclient\" (UID: \"b0b6ce2c-5ae8-496f-9374-d3069bc3d149\") " pod="openstack/openstackclient" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.783989 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0b6ce2c-5ae8-496f-9374-d3069bc3d149-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b0b6ce2c-5ae8-496f-9374-d3069bc3d149\") " pod="openstack/openstackclient" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.884981 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b0b6ce2c-5ae8-496f-9374-d3069bc3d149-openstack-config\") pod \"openstackclient\" (UID: \"b0b6ce2c-5ae8-496f-9374-d3069bc3d149\") " pod="openstack/openstackclient" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.885052 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0b6ce2c-5ae8-496f-9374-d3069bc3d149-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b0b6ce2c-5ae8-496f-9374-d3069bc3d149\") " pod="openstack/openstackclient" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.885154 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6sr9\" (UniqueName: \"kubernetes.io/projected/b0b6ce2c-5ae8-496f-9374-d3069bc3d149-kube-api-access-m6sr9\") pod \"openstackclient\" (UID: \"b0b6ce2c-5ae8-496f-9374-d3069bc3d149\") " pod="openstack/openstackclient" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.885199 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b0b6ce2c-5ae8-496f-9374-d3069bc3d149-openstack-config-secret\") pod \"openstackclient\" (UID: \"b0b6ce2c-5ae8-496f-9374-d3069bc3d149\") " pod="openstack/openstackclient" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.886822 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b0b6ce2c-5ae8-496f-9374-d3069bc3d149-openstack-config\") pod \"openstackclient\" (UID: \"b0b6ce2c-5ae8-496f-9374-d3069bc3d149\") " pod="openstack/openstackclient" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.893570 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b0b6ce2c-5ae8-496f-9374-d3069bc3d149-openstack-config-secret\") pod \"openstackclient\" (UID: \"b0b6ce2c-5ae8-496f-9374-d3069bc3d149\") " pod="openstack/openstackclient" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.897474 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0b6ce2c-5ae8-496f-9374-d3069bc3d149-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b0b6ce2c-5ae8-496f-9374-d3069bc3d149\") " pod="openstack/openstackclient" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.911451 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.931193 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6sr9\" (UniqueName: \"kubernetes.io/projected/b0b6ce2c-5ae8-496f-9374-d3069bc3d149-kube-api-access-m6sr9\") pod \"openstackclient\" (UID: \"b0b6ce2c-5ae8-496f-9374-d3069bc3d149\") " pod="openstack/openstackclient" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.129732 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-69c96776fd-k2z88" podUID="2f516fb6-322b-4eee-9d4d-a10176959bbb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.160:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.160:8443: connect: connection refused" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.130120 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.256377 4881 generic.go:334] "Generic (PLEG): container finished" podID="a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f" containerID="3c2fbfa61210bf849e04651287e22b6c198d4c12ea96a2312edd5e9f291c7879" exitCode=0 Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.256697 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" event={"ID":"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f","Type":"ContainerDied","Data":"3c2fbfa61210bf849e04651287e22b6c198d4c12ea96a2312edd5e9f291c7879"} Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.267841 4881 generic.go:334] "Generic (PLEG): container finished" podID="f51f915e-f553-4130-a16b-9e6af68a5a15" containerID="d69bb72f9eba472479b5b854a392dd678dcf12a1e5ab100dffbf954eda114573" exitCode=0 Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.267927 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-796dd99876-gb7nt" 
event={"ID":"f51f915e-f553-4130-a16b-9e6af68a5a15","Type":"ContainerDied","Data":"d69bb72f9eba472479b5b854a392dd678dcf12a1e5ab100dffbf954eda114573"} Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.275240 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ae53e440-5bd5-41e3-8339-57eebaef03d2","Type":"ContainerStarted","Data":"9855bd2a68e38d3c6ab91049f119372b94a26d3db8127fad0eb05eb3d93712a7"} Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.280924 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.309845 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.411411 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 11:21:25 crc kubenswrapper[4881]: E0121 11:21:25.412653 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f" containerName="dnsmasq-dns" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.412676 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f" containerName="dnsmasq-dns" Jan 21 11:21:25 crc kubenswrapper[4881]: E0121 11:21:25.412714 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f" containerName="init" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.412727 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f" containerName="init" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.412953 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f" containerName="dnsmasq-dns" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.414421 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.414772 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gpsz\" (UniqueName: \"kubernetes.io/projected/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-kube-api-access-9gpsz\") pod \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.414907 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-ovsdbserver-sb\") pod \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.415125 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-dns-swift-storage-0\") pod \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.415161 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-ovsdbserver-nb\") pod \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.415199 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-dns-svc\") pod \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.415280 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-config\") pod \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.422317 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.422877 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.423185 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.423316 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-f8snw" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.436745 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-kube-api-access-9gpsz" (OuterVolumeSpecName: "kube-api-access-9gpsz") pod "a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f" (UID: "a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f"). InnerVolumeSpecName "kube-api-access-9gpsz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.520151 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.520230 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8ac2a63-dc28-4695-a77c-e82af400f4b9-logs\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.520273 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.520289 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b8ac2a63-dc28-4695-a77c-e82af400f4b9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.520308 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-scripts\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.520356 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-config-data\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.520418 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7p9c\" (UniqueName: \"kubernetes.io/projected/b8ac2a63-dc28-4695-a77c-e82af400f4b9-kube-api-access-m7p9c\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.520537 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gpsz\" (UniqueName: \"kubernetes.io/projected/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-kube-api-access-9gpsz\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.527013 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f" (UID: "a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.535917 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f" (UID: "a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.555628 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-config" (OuterVolumeSpecName: "config") pod "a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f" (UID: "a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.574355 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f" (UID: "a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.588993 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f" (UID: "a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.623177 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-config-data\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.623277 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7p9c\" (UniqueName: \"kubernetes.io/projected/b8ac2a63-dc28-4695-a77c-e82af400f4b9-kube-api-access-m7p9c\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.623336 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.623374 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8ac2a63-dc28-4695-a77c-e82af400f4b9-logs\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.623412 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.623430 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b8ac2a63-dc28-4695-a77c-e82af400f4b9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.623446 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-scripts\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.623509 4881 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.623519 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.623528 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.623538 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.623547 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.624086 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.624918 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8ac2a63-dc28-4695-a77c-e82af400f4b9-logs\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.625220 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b8ac2a63-dc28-4695-a77c-e82af400f4b9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.631126 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-c849cf559-fjllv"] Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.634192 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.640939 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-config-data\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.656453 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-scripts\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.674686 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.694448 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7p9c\" (UniqueName: \"kubernetes.io/projected/b8ac2a63-dc28-4695-a77c-e82af400f4b9-kube-api-access-m7p9c\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: W0121 11:21:25.725422 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a89a9d0_4859_41cb_896d_f1a91e854d7b.slice/crio-7d5f5a0fecb347a3031d8e9d038b27129aa5ce2b2e49dd11bb8a2bb4f461cdbf WatchSource:0}: Error finding container 7d5f5a0fecb347a3031d8e9d038b27129aa5ce2b2e49dd11bb8a2bb4f461cdbf: Status 404 returned error can't find the container with id 7d5f5a0fecb347a3031d8e9d038b27129aa5ce2b2e49dd11bb8a2bb4f461cdbf Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.756848 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.000340 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.002702 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.006770 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.026478 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.048924 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.142365 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6314462-e91a-47e2-8c76-27d6045e4fd5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.143805 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6314462-e91a-47e2-8c76-27d6045e4fd5-logs\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.143958 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.144092 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.144206 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.144450 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c96n8\" (UniqueName: \"kubernetes.io/projected/b6314462-e91a-47e2-8c76-27d6045e4fd5-kube-api-access-c96n8\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.144559 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.253440 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b6314462-e91a-47e2-8c76-27d6045e4fd5-logs\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.253629 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.253724 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.253824 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.253996 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c96n8\" (UniqueName: \"kubernetes.io/projected/b6314462-e91a-47e2-8c76-27d6045e4fd5-kube-api-access-c96n8\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.254108 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.254267 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6314462-e91a-47e2-8c76-27d6045e4fd5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.254687 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.255044 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6314462-e91a-47e2-8c76-27d6045e4fd5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.261184 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6314462-e91a-47e2-8c76-27d6045e4fd5-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.268903 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.283755 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.284568 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.294265 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c96n8\" (UniqueName: \"kubernetes.io/projected/b6314462-e91a-47e2-8c76-27d6045e4fd5-kube-api-access-c96n8\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.322570 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b0b6ce2c-5ae8-496f-9374-d3069bc3d149","Type":"ContainerStarted","Data":"ac59164ee2feec470301d1408d5d445d2eb400ca2673ab9a5db218be6b952cfd"} Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.331953 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.347518 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" event={"ID":"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f","Type":"ContainerDied","Data":"89b83a73d98285f1ad5dfbcb846ef4a7cc6a0027b6f7fbb5d7b8bc7a7b615ee8"} Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.347625 4881 scope.go:117] "RemoveContainer" containerID="3c2fbfa61210bf849e04651287e22b6c198d4c12ea96a2312edd5e9f291c7879" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.348032 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.361124 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c849cf559-fjllv" event={"ID":"4a89a9d0-4859-41cb-896d-f1a91e854d7b","Type":"ContainerStarted","Data":"7d5f5a0fecb347a3031d8e9d038b27129aa5ce2b2e49dd11bb8a2bb4f461cdbf"} Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.430669 4881 scope.go:117] "RemoveContainer" containerID="ab477504b6174b1df2cba532dc993abe653a33a827965c0d26c8c5abcd35974f" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.495263 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.520756 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"] Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.544322 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"] Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.555887 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 11:21:26 crc kubenswrapper[4881]: W0121 11:21:26.577277 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8ac2a63_dc28_4695_a77c_e82af400f4b9.slice/crio-878fece72c860d769e8ee83651c9b53fe9a4d183577d57ce467d36c383c7548b WatchSource:0}: Error finding container 878fece72c860d769e8ee83651c9b53fe9a4d183577d57ce467d36c383c7548b: Status 404 returned error can't find the container with id 878fece72c860d769e8ee83651c9b53fe9a4d183577d57ce467d36c383c7548b Jan 21 11:21:27 crc kubenswrapper[4881]: I0121 11:21:27.196522 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 11:21:27 crc kubenswrapper[4881]: W0121 11:21:27.243577 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6314462_e91a_47e2_8c76_27d6045e4fd5.slice/crio-1b45bab75ec786490c31073f33d23492c5ef48b13f2754d5543dd412a6220954 WatchSource:0}: Error finding container 1b45bab75ec786490c31073f33d23492c5ef48b13f2754d5543dd412a6220954: Status 404 returned error can't find the container with id 1b45bab75ec786490c31073f33d23492c5ef48b13f2754d5543dd412a6220954 Jan 21 11:21:27 crc kubenswrapper[4881]: I0121 11:21:27.334369 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f" path="/var/lib/kubelet/pods/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f/volumes" Jan 21 11:21:27 crc kubenswrapper[4881]: I0121 11:21:27.389436 4881 generic.go:334] "Generic (PLEG): container finished" podID="4a89a9d0-4859-41cb-896d-f1a91e854d7b" containerID="e80fa73fd255dd2a9302a2ee6b75f7b4cf8767d543328dc915247c69166c0c25" exitCode=0 Jan 21 11:21:27 crc kubenswrapper[4881]: I0121 11:21:27.389539 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c849cf559-fjllv" event={"ID":"4a89a9d0-4859-41cb-896d-f1a91e854d7b","Type":"ContainerDied","Data":"e80fa73fd255dd2a9302a2ee6b75f7b4cf8767d543328dc915247c69166c0c25"} Jan 21 11:21:27 crc kubenswrapper[4881]: I0121 11:21:27.406335 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"b6314462-e91a-47e2-8c76-27d6045e4fd5","Type":"ContainerStarted","Data":"1b45bab75ec786490c31073f33d23492c5ef48b13f2754d5543dd412a6220954"} Jan 21 11:21:27 crc kubenswrapper[4881]: I0121 11:21:27.430486 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ae53e440-5bd5-41e3-8339-57eebaef03d2","Type":"ContainerStarted","Data":"8ac2bfc0ffb0d46d00cee4b790d5413d7436c14a608c0d6d0e310a86377c6f2b"} Jan 21 11:21:27 crc kubenswrapper[4881]: I0121 11:21:27.431724 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 11:21:27 crc kubenswrapper[4881]: I0121 11:21:27.435976 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b8ac2a63-dc28-4695-a77c-e82af400f4b9","Type":"ContainerStarted","Data":"878fece72c860d769e8ee83651c9b53fe9a4d183577d57ce467d36c383c7548b"} Jan 21 11:21:27 crc kubenswrapper[4881]: I0121 11:21:27.466672 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.466640165 podStartE2EDuration="5.466640165s" podCreationTimestamp="2026-01-21 11:21:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:27.455632181 +0000 UTC m=+1474.715588650" watchObservedRunningTime="2026-01-21 11:21:27.466640165 +0000 UTC m=+1474.726596634" Jan 21 11:21:28 crc kubenswrapper[4881]: I0121 11:21:28.485739 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c849cf559-fjllv" event={"ID":"4a89a9d0-4859-41cb-896d-f1a91e854d7b","Type":"ContainerStarted","Data":"520ec1cfcb7fa94d0057499475a0936b202225668f29de849ba69f710c127ead"} Jan 21 11:21:28 crc kubenswrapper[4881]: I0121 11:21:28.493054 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:28 crc kubenswrapper[4881]: I0121 11:21:28.507406 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b6314462-e91a-47e2-8c76-27d6045e4fd5","Type":"ContainerStarted","Data":"434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb"} Jan 21 11:21:28 crc kubenswrapper[4881]: I0121 11:21:28.525045 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b8ac2a63-dc28-4695-a77c-e82af400f4b9","Type":"ContainerStarted","Data":"243391ce37046a98efbd843bc1e6f28fda173bffe3ce05b733b63f613224e766"} Jan 21 11:21:28 crc kubenswrapper[4881]: I0121 11:21:28.528554 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c849cf559-fjllv" podStartSLOduration=4.528528266 podStartE2EDuration="4.528528266s" podCreationTimestamp="2026-01-21 11:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:28.519767218 +0000 UTC m=+1475.779723697" watchObservedRunningTime="2026-01-21 11:21:28.528528266 +0000 UTC m=+1475.788484735" Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 11:21:29.146437 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 11:21:29.224312 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 
11:21:29.497882 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 11:21:29.498241 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 11:21:29.498253 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 11:21:29.499185 4881 scope.go:117] "RemoveContainer" containerID="5ccae223d32b8d30267f4d247c29e77d1942427c122a26bc75e9b00b89fa3bc0" Jan 21 11:21:29 crc kubenswrapper[4881]: E0121 11:21:29.499599 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(ee4e7116-c2cd-43d5-af6b-9f30b5053e0e)\"" pod="openstack/watcher-decision-engine-0" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 11:21:29.546776 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 11:21:29.551394 4881 generic.go:334] "Generic (PLEG): container finished" podID="f51f915e-f553-4130-a16b-9e6af68a5a15" containerID="3a9e17862c5ff2f64ddcb7cb3eb9d73424fbbcd62c695e9a6f00fe4f1a20f86b" exitCode=0 Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 11:21:29.551527 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-796dd99876-gb7nt" event={"ID":"f51f915e-f553-4130-a16b-9e6af68a5a15","Type":"ContainerDied","Data":"3a9e17862c5ff2f64ddcb7cb3eb9d73424fbbcd62c695e9a6f00fe4f1a20f86b"} Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 11:21:29.555676 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b6314462-e91a-47e2-8c76-27d6045e4fd5","Type":"ContainerStarted","Data":"ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2"} Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 11:21:29.563526 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b8ac2a63-dc28-4695-a77c-e82af400f4b9","Type":"ContainerStarted","Data":"c7d5411076516ac1067feb6fa2326814efce9d04ded39d593fa3f53c461d73dc"} Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 11:21:29.563688 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="86045f5e-defd-4c68-a582-c51c9c26e5c7" containerName="cinder-scheduler" containerID="cri-o://f179a38b8e729fdba1d50653424c543fe9ebf0803e8ecb14e1eaa90d4edb87bf" gracePeriod=30 Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 11:21:29.563985 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="86045f5e-defd-4c68-a582-c51c9c26e5c7" containerName="probe" containerID="cri-o://d776082f3aaee81cc1f230c5cf4abdaa34059f0c862a2df0c93b102e79762938" gracePeriod=30 Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 11:21:29.595927 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.595902334 podStartE2EDuration="5.595902334s" podCreationTimestamp="2026-01-21 11:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:29.581478245 +0000 UTC m=+1476.841434724" watchObservedRunningTime="2026-01-21 11:21:29.595902334 +0000 UTC m=+1476.855858803" Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 11:21:29.604375 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.604355345 podStartE2EDuration="5.604355345s" podCreationTimestamp="2026-01-21 11:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:29.603645817 +0000 UTC m=+1476.863602296" watchObservedRunningTime="2026-01-21 11:21:29.604355345 +0000 UTC m=+1476.864311814" Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 11:21:29.654639 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.293950 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.461593 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-httpd-config\") pod \"f51f915e-f553-4130-a16b-9e6af68a5a15\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.461710 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-combined-ca-bundle\") pod \"f51f915e-f553-4130-a16b-9e6af68a5a15\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.461841 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-config\") pod \"f51f915e-f553-4130-a16b-9e6af68a5a15\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.461909 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgwv9\" (UniqueName: \"kubernetes.io/projected/f51f915e-f553-4130-a16b-9e6af68a5a15-kube-api-access-lgwv9\") pod \"f51f915e-f553-4130-a16b-9e6af68a5a15\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.461943 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-ovndb-tls-certs\") pod \"f51f915e-f553-4130-a16b-9e6af68a5a15\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.478426 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f51f915e-f553-4130-a16b-9e6af68a5a15-kube-api-access-lgwv9" (OuterVolumeSpecName: "kube-api-access-lgwv9") pod "f51f915e-f553-4130-a16b-9e6af68a5a15" (UID: "f51f915e-f553-4130-a16b-9e6af68a5a15"). InnerVolumeSpecName "kube-api-access-lgwv9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.493041 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "f51f915e-f553-4130-a16b-9e6af68a5a15" (UID: "f51f915e-f553-4130-a16b-9e6af68a5a15"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.529692 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f51f915e-f553-4130-a16b-9e6af68a5a15" (UID: "f51f915e-f553-4130-a16b-9e6af68a5a15"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.536366 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-config" (OuterVolumeSpecName: "config") pod "f51f915e-f553-4130-a16b-9e6af68a5a15" (UID: "f51f915e-f553-4130-a16b-9e6af68a5a15"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.564504 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.564556 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgwv9\" (UniqueName: \"kubernetes.io/projected/f51f915e-f553-4130-a16b-9e6af68a5a15-kube-api-access-lgwv9\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.564573 4881 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.564586 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.575991 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "f51f915e-f553-4130-a16b-9e6af68a5a15" (UID: "f51f915e-f553-4130-a16b-9e6af68a5a15"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.582608 4881 generic.go:334] "Generic (PLEG): container finished" podID="86045f5e-defd-4c68-a582-c51c9c26e5c7" containerID="d776082f3aaee81cc1f230c5cf4abdaa34059f0c862a2df0c93b102e79762938" exitCode=0 Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.582677 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"86045f5e-defd-4c68-a582-c51c9c26e5c7","Type":"ContainerDied","Data":"d776082f3aaee81cc1f230c5cf4abdaa34059f0c862a2df0c93b102e79762938"} Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.585451 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-796dd99876-gb7nt" event={"ID":"f51f915e-f553-4130-a16b-9e6af68a5a15","Type":"ContainerDied","Data":"2e4be17fa483a6184f2eda034f9fc33ec23230c3292d5bb3f6f80cd50bfff6e9"} Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.585494 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.585524 4881 scope.go:117] "RemoveContainer" containerID="d69bb72f9eba472479b5b854a392dd678dcf12a1e5ab100dffbf954eda114573" Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.585953 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b8ac2a63-dc28-4695-a77c-e82af400f4b9" containerName="glance-log" containerID="cri-o://243391ce37046a98efbd843bc1e6f28fda173bffe3ce05b733b63f613224e766" gracePeriod=30 Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.586293 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b8ac2a63-dc28-4695-a77c-e82af400f4b9" containerName="glance-httpd" containerID="cri-o://c7d5411076516ac1067feb6fa2326814efce9d04ded39d593fa3f53c461d73dc" gracePeriod=30 Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.669494 4881 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.693631 4881 scope.go:117] "RemoveContainer" containerID="3a9e17862c5ff2f64ddcb7cb3eb9d73424fbbcd62c695e9a6f00fe4f1a20f86b" Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.700099 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-796dd99876-gb7nt"] Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.709237 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-796dd99876-gb7nt"] Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.326864 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f51f915e-f553-4130-a16b-9e6af68a5a15" path="/var/lib/kubelet/pods/f51f915e-f553-4130-a16b-9e6af68a5a15/volumes" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.619247 4881 generic.go:334] "Generic (PLEG): container finished" podID="2f516fb6-322b-4eee-9d4d-a10176959bbb" containerID="c37cb0dabfc7bd198de45353bd7d592c9381160bf0f186350e93353fe2ea4470" exitCode=137 Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.619358 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69c96776fd-k2z88" 
event={"ID":"2f516fb6-322b-4eee-9d4d-a10176959bbb","Type":"ContainerDied","Data":"c37cb0dabfc7bd198de45353bd7d592c9381160bf0f186350e93353fe2ea4470"} Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.640113 4881 generic.go:334] "Generic (PLEG): container finished" podID="b8ac2a63-dc28-4695-a77c-e82af400f4b9" containerID="c7d5411076516ac1067feb6fa2326814efce9d04ded39d593fa3f53c461d73dc" exitCode=0 Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.640158 4881 generic.go:334] "Generic (PLEG): container finished" podID="b8ac2a63-dc28-4695-a77c-e82af400f4b9" containerID="243391ce37046a98efbd843bc1e6f28fda173bffe3ce05b733b63f613224e766" exitCode=143 Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.640400 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b6314462-e91a-47e2-8c76-27d6045e4fd5" containerName="glance-log" containerID="cri-o://434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb" gracePeriod=30 Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.640603 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b8ac2a63-dc28-4695-a77c-e82af400f4b9","Type":"ContainerDied","Data":"c7d5411076516ac1067feb6fa2326814efce9d04ded39d593fa3f53c461d73dc"} Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.640673 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b8ac2a63-dc28-4695-a77c-e82af400f4b9","Type":"ContainerDied","Data":"243391ce37046a98efbd843bc1e6f28fda173bffe3ce05b733b63f613224e766"} Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.640684 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b8ac2a63-dc28-4695-a77c-e82af400f4b9","Type":"ContainerDied","Data":"878fece72c860d769e8ee83651c9b53fe9a4d183577d57ce467d36c383c7548b"} Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.640701 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="878fece72c860d769e8ee83651c9b53fe9a4d183577d57ce467d36c383c7548b" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.640722 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b6314462-e91a-47e2-8c76-27d6045e4fd5" containerName="glance-httpd" containerID="cri-o://ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2" gracePeriod=30 Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.657349 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.825229 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8ac2a63-dc28-4695-a77c-e82af400f4b9-logs\") pod \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.825794 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8ac2a63-dc28-4695-a77c-e82af400f4b9-logs" (OuterVolumeSpecName: "logs") pod "b8ac2a63-dc28-4695-a77c-e82af400f4b9" (UID: "b8ac2a63-dc28-4695-a77c-e82af400f4b9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.825957 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b8ac2a63-dc28-4695-a77c-e82af400f4b9-httpd-run\") pod \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.826175 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8ac2a63-dc28-4695-a77c-e82af400f4b9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b8ac2a63-dc28-4695-a77c-e82af400f4b9" (UID: "b8ac2a63-dc28-4695-a77c-e82af400f4b9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.826314 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-config-data\") pod \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.827068 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-scripts\") pod \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.827187 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-combined-ca-bundle\") pod \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.827229 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7p9c\" (UniqueName: \"kubernetes.io/projected/b8ac2a63-dc28-4695-a77c-e82af400f4b9-kube-api-access-m7p9c\") pod \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.827416 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.828309 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8ac2a63-dc28-4695-a77c-e82af400f4b9-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.828329 4881 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b8ac2a63-dc28-4695-a77c-e82af400f4b9-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.837425 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8ac2a63-dc28-4695-a77c-e82af400f4b9-kube-api-access-m7p9c" (OuterVolumeSpecName: "kube-api-access-m7p9c") pod "b8ac2a63-dc28-4695-a77c-e82af400f4b9" (UID: "b8ac2a63-dc28-4695-a77c-e82af400f4b9"). InnerVolumeSpecName "kube-api-access-m7p9c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.837963 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-scripts" (OuterVolumeSpecName: "scripts") pod "b8ac2a63-dc28-4695-a77c-e82af400f4b9" (UID: "b8ac2a63-dc28-4695-a77c-e82af400f4b9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.838140 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "b8ac2a63-dc28-4695-a77c-e82af400f4b9" (UID: "b8ac2a63-dc28-4695-a77c-e82af400f4b9"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.867338 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8ac2a63-dc28-4695-a77c-e82af400f4b9" (UID: "b8ac2a63-dc28-4695-a77c-e82af400f4b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.897030 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-config-data" (OuterVolumeSpecName: "config-data") pod "b8ac2a63-dc28-4695-a77c-e82af400f4b9" (UID: "b8ac2a63-dc28-4695-a77c-e82af400f4b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.934924 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.934964 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.934976 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.934988 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7p9c\" (UniqueName: \"kubernetes.io/projected/b8ac2a63-dc28-4695-a77c-e82af400f4b9-kube-api-access-m7p9c\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.935019 4881 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.974991 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.979531 4881 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.037336 4881 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.140491 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f516fb6-322b-4eee-9d4d-a10176959bbb-config-data\") pod \"2f516fb6-322b-4eee-9d4d-a10176959bbb\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.140552 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-combined-ca-bundle\") pod \"2f516fb6-322b-4eee-9d4d-a10176959bbb\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.140838 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f516fb6-322b-4eee-9d4d-a10176959bbb-logs\") pod \"2f516fb6-322b-4eee-9d4d-a10176959bbb\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.141017 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-horizon-tls-certs\") pod \"2f516fb6-322b-4eee-9d4d-a10176959bbb\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.141096 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lfrt\" (UniqueName: \"kubernetes.io/projected/2f516fb6-322b-4eee-9d4d-a10176959bbb-kube-api-access-2lfrt\") pod \"2f516fb6-322b-4eee-9d4d-a10176959bbb\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.141225 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f516fb6-322b-4eee-9d4d-a10176959bbb-scripts\") pod \"2f516fb6-322b-4eee-9d4d-a10176959bbb\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.141314 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-horizon-secret-key\") pod \"2f516fb6-322b-4eee-9d4d-a10176959bbb\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.142470 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f516fb6-322b-4eee-9d4d-a10176959bbb-logs" (OuterVolumeSpecName: "logs") pod "2f516fb6-322b-4eee-9d4d-a10176959bbb" (UID: "2f516fb6-322b-4eee-9d4d-a10176959bbb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.148925 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "2f516fb6-322b-4eee-9d4d-a10176959bbb" (UID: "2f516fb6-322b-4eee-9d4d-a10176959bbb"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.149138 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f516fb6-322b-4eee-9d4d-a10176959bbb-kube-api-access-2lfrt" (OuterVolumeSpecName: "kube-api-access-2lfrt") pod "2f516fb6-322b-4eee-9d4d-a10176959bbb" (UID: "2f516fb6-322b-4eee-9d4d-a10176959bbb"). InnerVolumeSpecName "kube-api-access-2lfrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.175638 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f516fb6-322b-4eee-9d4d-a10176959bbb-scripts" (OuterVolumeSpecName: "scripts") pod "2f516fb6-322b-4eee-9d4d-a10176959bbb" (UID: "2f516fb6-322b-4eee-9d4d-a10176959bbb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.179971 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f516fb6-322b-4eee-9d4d-a10176959bbb-config-data" (OuterVolumeSpecName: "config-data") pod "2f516fb6-322b-4eee-9d4d-a10176959bbb" (UID: "2f516fb6-322b-4eee-9d4d-a10176959bbb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.201296 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "2f516fb6-322b-4eee-9d4d-a10176959bbb" (UID: "2f516fb6-322b-4eee-9d4d-a10176959bbb"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.205556 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f516fb6-322b-4eee-9d4d-a10176959bbb" (UID: "2f516fb6-322b-4eee-9d4d-a10176959bbb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.249472 4881 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.249511 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lfrt\" (UniqueName: \"kubernetes.io/projected/2f516fb6-322b-4eee-9d4d-a10176959bbb-kube-api-access-2lfrt\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.249524 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f516fb6-322b-4eee-9d4d-a10176959bbb-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.249533 4881 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.249542 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f516fb6-322b-4eee-9d4d-a10176959bbb-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.249551 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.249558 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f516fb6-322b-4eee-9d4d-a10176959bbb-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.374853 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.454162 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-scripts\") pod \"b6314462-e91a-47e2-8c76-27d6045e4fd5\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.454911 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-config-data\") pod \"b6314462-e91a-47e2-8c76-27d6045e4fd5\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.454999 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"b6314462-e91a-47e2-8c76-27d6045e4fd5\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.455059 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-combined-ca-bundle\") pod \"b6314462-e91a-47e2-8c76-27d6045e4fd5\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.455227 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6314462-e91a-47e2-8c76-27d6045e4fd5-httpd-run\") pod \"b6314462-e91a-47e2-8c76-27d6045e4fd5\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.455292 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c96n8\" (UniqueName: \"kubernetes.io/projected/b6314462-e91a-47e2-8c76-27d6045e4fd5-kube-api-access-c96n8\") pod \"b6314462-e91a-47e2-8c76-27d6045e4fd5\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.455372 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6314462-e91a-47e2-8c76-27d6045e4fd5-logs\") pod \"b6314462-e91a-47e2-8c76-27d6045e4fd5\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.456993 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6314462-e91a-47e2-8c76-27d6045e4fd5-logs" (OuterVolumeSpecName: "logs") pod "b6314462-e91a-47e2-8c76-27d6045e4fd5" (UID: "b6314462-e91a-47e2-8c76-27d6045e4fd5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.457207 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6314462-e91a-47e2-8c76-27d6045e4fd5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b6314462-e91a-47e2-8c76-27d6045e4fd5" (UID: "b6314462-e91a-47e2-8c76-27d6045e4fd5"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.460985 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6314462-e91a-47e2-8c76-27d6045e4fd5-kube-api-access-c96n8" (OuterVolumeSpecName: "kube-api-access-c96n8") pod "b6314462-e91a-47e2-8c76-27d6045e4fd5" (UID: "b6314462-e91a-47e2-8c76-27d6045e4fd5"). InnerVolumeSpecName "kube-api-access-c96n8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.464224 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "b6314462-e91a-47e2-8c76-27d6045e4fd5" (UID: "b6314462-e91a-47e2-8c76-27d6045e4fd5"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.467197 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-scripts" (OuterVolumeSpecName: "scripts") pod "b6314462-e91a-47e2-8c76-27d6045e4fd5" (UID: "b6314462-e91a-47e2-8c76-27d6045e4fd5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.487775 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6314462-e91a-47e2-8c76-27d6045e4fd5" (UID: "b6314462-e91a-47e2-8c76-27d6045e4fd5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.544013 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-config-data" (OuterVolumeSpecName: "config-data") pod "b6314462-e91a-47e2-8c76-27d6045e4fd5" (UID: "b6314462-e91a-47e2-8c76-27d6045e4fd5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.558115 4881 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.558162 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.558211 4881 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6314462-e91a-47e2-8c76-27d6045e4fd5-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.558233 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c96n8\" (UniqueName: \"kubernetes.io/projected/b6314462-e91a-47e2-8c76-27d6045e4fd5-kube-api-access-c96n8\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.558248 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6314462-e91a-47e2-8c76-27d6045e4fd5-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.558258 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.558298 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.588093 4881 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.654236 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69c96776fd-k2z88" event={"ID":"2f516fb6-322b-4eee-9d4d-a10176959bbb","Type":"ContainerDied","Data":"1c1c6837f2242fbd603bbb32074adc55de9c3121097b94c5088bc30db69ba787"} Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.654333 4881 scope.go:117] "RemoveContainer" containerID="20e9501e200b98586a1c9e7d12e2adf41d01903bd2505ab83e7f8f0fc5404f52" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.654268 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.659247 4881 generic.go:334] "Generic (PLEG): container finished" podID="b6314462-e91a-47e2-8c76-27d6045e4fd5" containerID="ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2" exitCode=0 Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.659284 4881 generic.go:334] "Generic (PLEG): container finished" podID="b6314462-e91a-47e2-8c76-27d6045e4fd5" containerID="434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb" exitCode=143 Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.659357 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.659922 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.660044 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b6314462-e91a-47e2-8c76-27d6045e4fd5","Type":"ContainerDied","Data":"ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2"} Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.660072 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b6314462-e91a-47e2-8c76-27d6045e4fd5","Type":"ContainerDied","Data":"434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb"} Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.660086 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b6314462-e91a-47e2-8c76-27d6045e4fd5","Type":"ContainerDied","Data":"1b45bab75ec786490c31073f33d23492c5ef48b13f2754d5543dd412a6220954"} Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.660898 4881 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.736391 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69c96776fd-k2z88"] Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.773873 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-69c96776fd-k2z88"] Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.824297 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.848644 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.862270 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.878838 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 11:21:32 crc kubenswrapper[4881]: E0121 11:21:32.879447 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f516fb6-322b-4eee-9d4d-a10176959bbb" containerName="horizon-log" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879470 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f516fb6-322b-4eee-9d4d-a10176959bbb" containerName="horizon-log" Jan 21 11:21:32 crc kubenswrapper[4881]: E0121 11:21:32.879487 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51f915e-f553-4130-a16b-9e6af68a5a15" containerName="neutron-api" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879495 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f51f915e-f553-4130-a16b-9e6af68a5a15" containerName="neutron-api" Jan 21 11:21:32 crc kubenswrapper[4881]: E0121 11:21:32.879504 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f516fb6-322b-4eee-9d4d-a10176959bbb" containerName="horizon" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879512 4881 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2f516fb6-322b-4eee-9d4d-a10176959bbb" containerName="horizon" Jan 21 11:21:32 crc kubenswrapper[4881]: E0121 11:21:32.879531 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8ac2a63-dc28-4695-a77c-e82af400f4b9" containerName="glance-log" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879538 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8ac2a63-dc28-4695-a77c-e82af400f4b9" containerName="glance-log" Jan 21 11:21:32 crc kubenswrapper[4881]: E0121 11:21:32.879549 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51f915e-f553-4130-a16b-9e6af68a5a15" containerName="neutron-httpd" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879556 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f51f915e-f553-4130-a16b-9e6af68a5a15" containerName="neutron-httpd" Jan 21 11:21:32 crc kubenswrapper[4881]: E0121 11:21:32.879571 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6314462-e91a-47e2-8c76-27d6045e4fd5" containerName="glance-httpd" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879577 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6314462-e91a-47e2-8c76-27d6045e4fd5" containerName="glance-httpd" Jan 21 11:21:32 crc kubenswrapper[4881]: E0121 11:21:32.879606 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8ac2a63-dc28-4695-a77c-e82af400f4b9" containerName="glance-httpd" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879614 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8ac2a63-dc28-4695-a77c-e82af400f4b9" containerName="glance-httpd" Jan 21 11:21:32 crc kubenswrapper[4881]: E0121 11:21:32.879632 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6314462-e91a-47e2-8c76-27d6045e4fd5" containerName="glance-log" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879639 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6314462-e91a-47e2-8c76-27d6045e4fd5" containerName="glance-log" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879902 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f516fb6-322b-4eee-9d4d-a10176959bbb" containerName="horizon-log" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879922 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f516fb6-322b-4eee-9d4d-a10176959bbb" containerName="horizon" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879931 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f51f915e-f553-4130-a16b-9e6af68a5a15" containerName="neutron-api" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879949 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8ac2a63-dc28-4695-a77c-e82af400f4b9" containerName="glance-log" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879961 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6314462-e91a-47e2-8c76-27d6045e4fd5" containerName="glance-log" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879974 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8ac2a63-dc28-4695-a77c-e82af400f4b9" containerName="glance-httpd" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879981 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6314462-e91a-47e2-8c76-27d6045e4fd5" containerName="glance-httpd" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879994 4881 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f51f915e-f553-4130-a16b-9e6af68a5a15" containerName="neutron-httpd" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.881122 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.883364 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.883969 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.884033 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-f8snw" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.884136 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.887103 4881 scope.go:117] "RemoveContainer" containerID="c37cb0dabfc7bd198de45353bd7d592c9381160bf0f186350e93353fe2ea4470" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.888064 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.901216 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.920092 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.925195 4881 scope.go:117] "RemoveContainer" containerID="ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.940902 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.941629 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.946132 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.946322 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.969196 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.969255 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.969312 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6fqw\" (UniqueName: \"kubernetes.io/projected/86debe8b-5d02-4f2e-a311-6106609aeb1c-kube-api-access-v6fqw\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.969345 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/86debe8b-5d02-4f2e-a311-6106609aeb1c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.969365 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.969400 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86debe8b-5d02-4f2e-a311-6106609aeb1c-logs\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.969425 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.969456 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" 
(UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.982703 4881 scope.go:117] "RemoveContainer" containerID="434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb" Jan 21 11:21:33 crc kubenswrapper[4881]: E0121 11:21:33.002855 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f516fb6_322b_4eee_9d4d_a10176959bbb.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8ac2a63_dc28_4695_a77c_e82af400f4b9.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6314462_e91a_47e2_8c76_27d6045e4fd5.slice\": RecentStats: unable to find data in memory cache]" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.060615 4881 scope.go:117] "RemoveContainer" containerID="ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2" Jan 21 11:21:33 crc kubenswrapper[4881]: E0121 11:21:33.064507 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2\": container with ID starting with ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2 not found: ID does not exist" containerID="ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.064547 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2"} err="failed to get container status \"ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2\": rpc error: code = NotFound desc = could not find container \"ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2\": container with ID starting with ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2 not found: ID does not exist" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.064572 4881 scope.go:117] "RemoveContainer" containerID="434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb" Jan 21 11:21:33 crc kubenswrapper[4881]: E0121 11:21:33.065190 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb\": container with ID starting with 434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb not found: ID does not exist" containerID="434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.065224 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb"} err="failed to get container status \"434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb\": rpc error: code = NotFound desc = could not find container \"434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb\": container with ID starting with 434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb not found: ID does not exist" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.065266 4881 scope.go:117] "RemoveContainer" 
containerID="ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.065844 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2"} err="failed to get container status \"ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2\": rpc error: code = NotFound desc = could not find container \"ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2\": container with ID starting with ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2 not found: ID does not exist" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.065872 4881 scope.go:117] "RemoveContainer" containerID="434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.066368 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb"} err="failed to get container status \"434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb\": rpc error: code = NotFound desc = could not find container \"434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb\": container with ID starting with 434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb not found: ID does not exist" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.071840 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.071919 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6fqw\" (UniqueName: \"kubernetes.io/projected/86debe8b-5d02-4f2e-a311-6106609aeb1c-kube-api-access-v6fqw\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.071966 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a22f004-7d84-4edc-86f7-d58adb131a45-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.071986 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/86debe8b-5d02-4f2e-a311-6106609aeb1c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.072009 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.072045 4881 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86debe8b-5d02-4f2e-a311-6106609aeb1c-logs\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.072071 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a22f004-7d84-4edc-86f7-d58adb131a45-logs\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.072088 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.072109 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-config-data\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.072138 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.072157 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh585\" (UniqueName: \"kubernetes.io/projected/5a22f004-7d84-4edc-86f7-d58adb131a45-kube-api-access-xh585\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.072197 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.072264 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.072292 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.072311 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-scripts\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.072334 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.072626 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.073560 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86debe8b-5d02-4f2e-a311-6106609aeb1c-logs\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.073654 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/86debe8b-5d02-4f2e-a311-6106609aeb1c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.078602 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.081076 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.090331 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.091079 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.093577 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6fqw\" (UniqueName: 
\"kubernetes.io/projected/86debe8b-5d02-4f2e-a311-6106609aeb1c-kube-api-access-v6fqw\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.122055 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.174311 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-config-data\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.174629 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh585\" (UniqueName: \"kubernetes.io/projected/5a22f004-7d84-4edc-86f7-d58adb131a45-kube-api-access-xh585\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.174656 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.174739 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.174756 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-scripts\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.174809 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.174847 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a22f004-7d84-4edc-86f7-d58adb131a45-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.174893 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a22f004-7d84-4edc-86f7-d58adb131a45-logs\") pod \"glance-default-external-api-0\" 
(UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.175231 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.175425 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a22f004-7d84-4edc-86f7-d58adb131a45-logs\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.175470 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a22f004-7d84-4edc-86f7-d58adb131a45-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.183966 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-config-data\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.185289 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-scripts\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.194642 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.194761 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.198636 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh585\" (UniqueName: \"kubernetes.io/projected/5a22f004-7d84-4edc-86f7-d58adb131a45-kube-api-access-xh585\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.208957 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.238280 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.301438 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.329208 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f516fb6-322b-4eee-9d4d-a10176959bbb" path="/var/lib/kubelet/pods/2f516fb6-322b-4eee-9d4d-a10176959bbb/volumes" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.330097 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6314462-e91a-47e2-8c76-27d6045e4fd5" path="/var/lib/kubelet/pods/b6314462-e91a-47e2-8c76-27d6045e4fd5/volumes" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.331240 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8ac2a63-dc28-4695-a77c-e82af400f4b9" path="/var/lib/kubelet/pods/b8ac2a63-dc28-4695-a77c-e82af400f4b9/volumes" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.939567 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 11:21:34 crc kubenswrapper[4881]: I0121 11:21:34.108768 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 11:21:34 crc kubenswrapper[4881]: W0121 11:21:34.130656 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a22f004_7d84_4edc_86f7_d58adb131a45.slice/crio-c118bf221673b7075db16b12d92f917f44d316d1edbfb63816381a8a7fe9bfa7 WatchSource:0}: Error finding container c118bf221673b7075db16b12d92f917f44d316d1edbfb63816381a8a7fe9bfa7: Status 404 returned error can't find the container with id c118bf221673b7075db16b12d92f917f44d316d1edbfb63816381a8a7fe9bfa7 Jan 21 11:21:34 crc kubenswrapper[4881]: I0121 11:21:34.696143 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a22f004-7d84-4edc-86f7-d58adb131a45","Type":"ContainerStarted","Data":"c118bf221673b7075db16b12d92f917f44d316d1edbfb63816381a8a7fe9bfa7"} Jan 21 11:21:34 crc kubenswrapper[4881]: I0121 11:21:34.698002 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"86debe8b-5d02-4f2e-a311-6106609aeb1c","Type":"ContainerStarted","Data":"d67de62ed844d45b06b45329375dde0d59a63d15e298263c3618894b7576c1ba"} Jan 21 11:21:34 crc kubenswrapper[4881]: I0121 11:21:34.852105 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 11:21:34 crc kubenswrapper[4881]: I0121 11:21:34.912924 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.045091 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77b944d67-mw2nq"] Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.045327 4881 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-77b944d67-mw2nq" podUID="b0326de6-1c1a-4e21-9592-ae86b46d7a3f" containerName="dnsmasq-dns" containerID="cri-o://74a966ab9ba8420c744ac8e1932e9ad473ca91de2100fd5d2f1bf2544fd837be" gracePeriod=10 Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.362387 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7564f958f5-jmdx2"] Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.366424 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7564f958f5-jmdx2"] Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.380611 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.388244 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.388617 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.389072 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.425900 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56bl5\" (UniqueName: \"kubernetes.io/projected/86a11f48-404e-4c5e-8ff4-5033a6411956-kube-api-access-56bl5\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.426008 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86a11f48-404e-4c5e-8ff4-5033a6411956-config-data\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.426084 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86a11f48-404e-4c5e-8ff4-5033a6411956-combined-ca-bundle\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.426189 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86a11f48-404e-4c5e-8ff4-5033a6411956-run-httpd\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.426210 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86a11f48-404e-4c5e-8ff4-5033a6411956-internal-tls-certs\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.426232 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/86a11f48-404e-4c5e-8ff4-5033a6411956-etc-swift\") pod 
\"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.426258 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86a11f48-404e-4c5e-8ff4-5033a6411956-log-httpd\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.426441 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86a11f48-404e-4c5e-8ff4-5033a6411956-public-tls-certs\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.460821 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.527911 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-scripts\") pod \"86045f5e-defd-4c68-a582-c51c9c26e5c7\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.528009 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-config-data-custom\") pod \"86045f5e-defd-4c68-a582-c51c9c26e5c7\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.528059 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86045f5e-defd-4c68-a582-c51c9c26e5c7-etc-machine-id\") pod \"86045f5e-defd-4c68-a582-c51c9c26e5c7\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.528131 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h42sc\" (UniqueName: \"kubernetes.io/projected/86045f5e-defd-4c68-a582-c51c9c26e5c7-kube-api-access-h42sc\") pod \"86045f5e-defd-4c68-a582-c51c9c26e5c7\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.528170 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-config-data\") pod \"86045f5e-defd-4c68-a582-c51c9c26e5c7\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.528201 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-combined-ca-bundle\") pod \"86045f5e-defd-4c68-a582-c51c9c26e5c7\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.528582 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86a11f48-404e-4c5e-8ff4-5033a6411956-public-tls-certs\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: 
\"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.528681 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56bl5\" (UniqueName: \"kubernetes.io/projected/86a11f48-404e-4c5e-8ff4-5033a6411956-kube-api-access-56bl5\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.528713 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86a11f48-404e-4c5e-8ff4-5033a6411956-config-data\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.528744 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86a11f48-404e-4c5e-8ff4-5033a6411956-combined-ca-bundle\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.528780 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86a11f48-404e-4c5e-8ff4-5033a6411956-run-httpd\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.528870 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86a11f48-404e-4c5e-8ff4-5033a6411956-internal-tls-certs\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.528886 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/86a11f48-404e-4c5e-8ff4-5033a6411956-etc-swift\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.528903 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86a11f48-404e-4c5e-8ff4-5033a6411956-log-httpd\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.529968 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86a11f48-404e-4c5e-8ff4-5033a6411956-log-httpd\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.530361 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86a11f48-404e-4c5e-8ff4-5033a6411956-run-httpd\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 
Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.534651 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86045f5e-defd-4c68-a582-c51c9c26e5c7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "86045f5e-defd-4c68-a582-c51c9c26e5c7" (UID: "86045f5e-defd-4c68-a582-c51c9c26e5c7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.537873 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/86a11f48-404e-4c5e-8ff4-5033a6411956-etc-swift\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2"
Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.538465 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "86045f5e-defd-4c68-a582-c51c9c26e5c7" (UID: "86045f5e-defd-4c68-a582-c51c9c26e5c7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.552889 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86045f5e-defd-4c68-a582-c51c9c26e5c7-kube-api-access-h42sc" (OuterVolumeSpecName: "kube-api-access-h42sc") pod "86045f5e-defd-4c68-a582-c51c9c26e5c7" (UID: "86045f5e-defd-4c68-a582-c51c9c26e5c7"). InnerVolumeSpecName "kube-api-access-h42sc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.553275 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86a11f48-404e-4c5e-8ff4-5033a6411956-combined-ca-bundle\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2"
Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.553400 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86a11f48-404e-4c5e-8ff4-5033a6411956-internal-tls-certs\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2"
Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.587197 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86a11f48-404e-4c5e-8ff4-5033a6411956-config-data\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2"
Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.587250 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86a11f48-404e-4c5e-8ff4-5033a6411956-public-tls-certs\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2"
Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.591441 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56bl5\" (UniqueName: \"kubernetes.io/projected/86a11f48-404e-4c5e-8ff4-5033a6411956-kube-api-access-56bl5\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2"
Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.600355 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-scripts" (OuterVolumeSpecName: "scripts") pod "86045f5e-defd-4c68-a582-c51c9c26e5c7" (UID: "86045f5e-defd-4c68-a582-c51c9c26e5c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.606466 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.635070 4881 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.635097 4881 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86045f5e-defd-4c68-a582-c51c9c26e5c7-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.635108 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h42sc\" (UniqueName: \"kubernetes.io/projected/86045f5e-defd-4c68-a582-c51c9c26e5c7-kube-api-access-h42sc\") on node \"crc\" DevicePath \"\""
Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.635119 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.662545 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86045f5e-defd-4c68-a582-c51c9c26e5c7" (UID: "86045f5e-defd-4c68-a582-c51c9c26e5c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.733323 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-config-data" (OuterVolumeSpecName: "config-data") pod "86045f5e-defd-4c68-a582-c51c9c26e5c7" (UID: "86045f5e-defd-4c68-a582-c51c9c26e5c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.737219 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.737373 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.760622 4881 generic.go:334] "Generic (PLEG): container finished" podID="b0326de6-1c1a-4e21-9592-ae86b46d7a3f" containerID="74a966ab9ba8420c744ac8e1932e9ad473ca91de2100fd5d2f1bf2544fd837be" exitCode=0 Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.760731 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77b944d67-mw2nq" event={"ID":"b0326de6-1c1a-4e21-9592-ae86b46d7a3f","Type":"ContainerDied","Data":"74a966ab9ba8420c744ac8e1932e9ad473ca91de2100fd5d2f1bf2544fd837be"} Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.778082 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a22f004-7d84-4edc-86f7-d58adb131a45","Type":"ContainerStarted","Data":"9286d3d52dfda503e9a39d6bc904388c1d8fb7d48591cc6a081eaedbcac3451b"} Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.783883 4881 generic.go:334] "Generic (PLEG): container finished" podID="86045f5e-defd-4c68-a582-c51c9c26e5c7" containerID="f179a38b8e729fdba1d50653424c543fe9ebf0803e8ecb14e1eaa90d4edb87bf" exitCode=0 Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.783943 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"86045f5e-defd-4c68-a582-c51c9c26e5c7","Type":"ContainerDied","Data":"f179a38b8e729fdba1d50653424c543fe9ebf0803e8ecb14e1eaa90d4edb87bf"} Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.783967 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"86045f5e-defd-4c68-a582-c51c9c26e5c7","Type":"ContainerDied","Data":"37f117f350f4a5bb6279fc8d328dfd979286450f9c150553b8cff2ebf1ef387c"} Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.783984 4881 scope.go:117] "RemoveContainer" containerID="d776082f3aaee81cc1f230c5cf4abdaa34059f0c862a2df0c93b102e79762938" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.784078 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.801181 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"86debe8b-5d02-4f2e-a311-6106609aeb1c","Type":"ContainerStarted","Data":"f3bc5d7bc188f1c4ac565e1d75e559e4a8e17c15c9ed4b157de750543aaa6b37"} Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.801475 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="ceilometer-central-agent" containerID="cri-o://bc7224d9bf84f344828f19a13fb8096ac19d517cb3bb70d8fce495b5aa46625b" gracePeriod=30 Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.801686 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="proxy-httpd" containerID="cri-o://80eb788c6d10eab27f68e4afaa093b8aa3a02ead209347f52848e0e84c80db9f" gracePeriod=30 Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.801814 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="ceilometer-notification-agent" containerID="cri-o://53e2fe665bdaeb7b9eb972106db909c519d01d1c08141b3cb40de82bd0536330" gracePeriod=30 Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.801920 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="sg-core" containerID="cri-o://899f70ee131f6e530963ca573a67921fd95a35fbdae76709308568e8f0b66d06" gracePeriod=30 Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.821873 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:35.988463 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.025519 4881 scope.go:117] "RemoveContainer" containerID="f179a38b8e729fdba1d50653424c543fe9ebf0803e8ecb14e1eaa90d4edb87bf" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.052622 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-dns-svc\") pod \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.052684 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8kc6\" (UniqueName: \"kubernetes.io/projected/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-kube-api-access-h8kc6\") pod \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.052717 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-ovsdbserver-nb\") pod \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.052772 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-ovsdbserver-sb\") pod \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.053383 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-config\") pod \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.053472 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-dns-swift-storage-0\") pod \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.064757 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-kube-api-access-h8kc6" (OuterVolumeSpecName: "kube-api-access-h8kc6") pod "b0326de6-1c1a-4e21-9592-ae86b46d7a3f" (UID: "b0326de6-1c1a-4e21-9592-ae86b46d7a3f"). InnerVolumeSpecName "kube-api-access-h8kc6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.074849 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.099023 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.120052 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 11:21:36 crc kubenswrapper[4881]: E0121 11:21:36.120676 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86045f5e-defd-4c68-a582-c51c9c26e5c7" containerName="probe" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.120699 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="86045f5e-defd-4c68-a582-c51c9c26e5c7" containerName="probe" Jan 21 11:21:36 crc kubenswrapper[4881]: E0121 11:21:36.120723 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0326de6-1c1a-4e21-9592-ae86b46d7a3f" containerName="dnsmasq-dns" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.120734 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0326de6-1c1a-4e21-9592-ae86b46d7a3f" containerName="dnsmasq-dns" Jan 21 11:21:36 crc kubenswrapper[4881]: E0121 11:21:36.120742 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86045f5e-defd-4c68-a582-c51c9c26e5c7" containerName="cinder-scheduler" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.120749 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="86045f5e-defd-4c68-a582-c51c9c26e5c7" containerName="cinder-scheduler" Jan 21 11:21:36 crc kubenswrapper[4881]: E0121 11:21:36.120763 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0326de6-1c1a-4e21-9592-ae86b46d7a3f" containerName="init" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.120770 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0326de6-1c1a-4e21-9592-ae86b46d7a3f" containerName="init" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.121060 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0326de6-1c1a-4e21-9592-ae86b46d7a3f" containerName="dnsmasq-dns" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.121079 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="86045f5e-defd-4c68-a582-c51c9c26e5c7" containerName="cinder-scheduler" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.121096 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="86045f5e-defd-4c68-a582-c51c9c26e5c7" containerName="probe" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.124988 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.136808 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.147364 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.154755 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b0326de6-1c1a-4e21-9592-ae86b46d7a3f" (UID: "b0326de6-1c1a-4e21-9592-ae86b46d7a3f"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.156491 4881 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.156522 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8kc6\" (UniqueName: \"kubernetes.io/projected/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-kube-api-access-h8kc6\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.186189 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b0326de6-1c1a-4e21-9592-ae86b46d7a3f" (UID: "b0326de6-1c1a-4e21-9592-ae86b46d7a3f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.216519 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b0326de6-1c1a-4e21-9592-ae86b46d7a3f" (UID: "b0326de6-1c1a-4e21-9592-ae86b46d7a3f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.220488 4881 scope.go:117] "RemoveContainer" containerID="d776082f3aaee81cc1f230c5cf4abdaa34059f0c862a2df0c93b102e79762938" Jan 21 11:21:36 crc kubenswrapper[4881]: E0121 11:21:36.221342 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d776082f3aaee81cc1f230c5cf4abdaa34059f0c862a2df0c93b102e79762938\": container with ID starting with d776082f3aaee81cc1f230c5cf4abdaa34059f0c862a2df0c93b102e79762938 not found: ID does not exist" containerID="d776082f3aaee81cc1f230c5cf4abdaa34059f0c862a2df0c93b102e79762938" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.221384 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d776082f3aaee81cc1f230c5cf4abdaa34059f0c862a2df0c93b102e79762938"} err="failed to get container status \"d776082f3aaee81cc1f230c5cf4abdaa34059f0c862a2df0c93b102e79762938\": rpc error: code = NotFound desc = could not find container \"d776082f3aaee81cc1f230c5cf4abdaa34059f0c862a2df0c93b102e79762938\": container with ID starting with d776082f3aaee81cc1f230c5cf4abdaa34059f0c862a2df0c93b102e79762938 not found: ID does not exist" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.221412 4881 scope.go:117] "RemoveContainer" containerID="f179a38b8e729fdba1d50653424c543fe9ebf0803e8ecb14e1eaa90d4edb87bf" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.225716 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b0326de6-1c1a-4e21-9592-ae86b46d7a3f" (UID: "b0326de6-1c1a-4e21-9592-ae86b46d7a3f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:36 crc kubenswrapper[4881]: E0121 11:21:36.228156 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f179a38b8e729fdba1d50653424c543fe9ebf0803e8ecb14e1eaa90d4edb87bf\": container with ID starting with f179a38b8e729fdba1d50653424c543fe9ebf0803e8ecb14e1eaa90d4edb87bf not found: ID does not exist" containerID="f179a38b8e729fdba1d50653424c543fe9ebf0803e8ecb14e1eaa90d4edb87bf" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.228205 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f179a38b8e729fdba1d50653424c543fe9ebf0803e8ecb14e1eaa90d4edb87bf"} err="failed to get container status \"f179a38b8e729fdba1d50653424c543fe9ebf0803e8ecb14e1eaa90d4edb87bf\": rpc error: code = NotFound desc = could not find container \"f179a38b8e729fdba1d50653424c543fe9ebf0803e8ecb14e1eaa90d4edb87bf\": container with ID starting with f179a38b8e729fdba1d50653424c543fe9ebf0803e8ecb14e1eaa90d4edb87bf not found: ID does not exist" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.259040 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab676e77-1ab3-4cab-9960-a00babfe74fb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.259100 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab676e77-1ab3-4cab-9960-a00babfe74fb-scripts\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.259138 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab676e77-1ab3-4cab-9960-a00babfe74fb-config-data\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.259177 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkc2q\" (UniqueName: \"kubernetes.io/projected/ab676e77-1ab3-4cab-9960-a00babfe74fb-kube-api-access-xkc2q\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.259230 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab676e77-1ab3-4cab-9960-a00babfe74fb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.259257 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab676e77-1ab3-4cab-9960-a00babfe74fb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.259421 4881 reconciler_common.go:293] "Volume detached for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.259437 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.259475 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.300683 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-config" (OuterVolumeSpecName: "config") pod "b0326de6-1c1a-4e21-9592-ae86b46d7a3f" (UID: "b0326de6-1c1a-4e21-9592-ae86b46d7a3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.363014 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkc2q\" (UniqueName: \"kubernetes.io/projected/ab676e77-1ab3-4cab-9960-a00babfe74fb-kube-api-access-xkc2q\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.363121 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab676e77-1ab3-4cab-9960-a00babfe74fb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.363168 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab676e77-1ab3-4cab-9960-a00babfe74fb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.363254 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab676e77-1ab3-4cab-9960-a00babfe74fb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.363301 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab676e77-1ab3-4cab-9960-a00babfe74fb-scripts\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.363346 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab676e77-1ab3-4cab-9960-a00babfe74fb-config-data\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.363399 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-config\") on node \"crc\" DevicePath 
\"\"" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.364825 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab676e77-1ab3-4cab-9960-a00babfe74fb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.372855 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab676e77-1ab3-4cab-9960-a00babfe74fb-scripts\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.376069 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab676e77-1ab3-4cab-9960-a00babfe74fb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.377461 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab676e77-1ab3-4cab-9960-a00babfe74fb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.382087 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkc2q\" (UniqueName: \"kubernetes.io/projected/ab676e77-1ab3-4cab-9960-a00babfe74fb-kube-api-access-xkc2q\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.386657 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab676e77-1ab3-4cab-9960-a00babfe74fb-config-data\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.524991 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.599719 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.615209 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7564f958f5-jmdx2"] Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.889107 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"86debe8b-5d02-4f2e-a311-6106609aeb1c","Type":"ContainerStarted","Data":"2fa6aa1996c6f4201fc93d5c8a39f33293aba78e3cc280dea3665101a00cd065"} Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.895412 4881 generic.go:334] "Generic (PLEG): container finished" podID="75119e97-b896-4b71-ab1f-28db45a4606d" containerID="80eb788c6d10eab27f68e4afaa093b8aa3a02ead209347f52848e0e84c80db9f" exitCode=0 Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.895434 4881 generic.go:334] "Generic (PLEG): container finished" podID="75119e97-b896-4b71-ab1f-28db45a4606d" containerID="899f70ee131f6e530963ca573a67921fd95a35fbdae76709308568e8f0b66d06" exitCode=2 Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.895442 4881 generic.go:334] "Generic (PLEG): container finished" podID="75119e97-b896-4b71-ab1f-28db45a4606d" containerID="bc7224d9bf84f344828f19a13fb8096ac19d517cb3bb70d8fce495b5aa46625b" exitCode=0 Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.895468 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75119e97-b896-4b71-ab1f-28db45a4606d","Type":"ContainerDied","Data":"80eb788c6d10eab27f68e4afaa093b8aa3a02ead209347f52848e0e84c80db9f"} Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.895484 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75119e97-b896-4b71-ab1f-28db45a4606d","Type":"ContainerDied","Data":"899f70ee131f6e530963ca573a67921fd95a35fbdae76709308568e8f0b66d06"} Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.895494 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75119e97-b896-4b71-ab1f-28db45a4606d","Type":"ContainerDied","Data":"bc7224d9bf84f344828f19a13fb8096ac19d517cb3bb70d8fce495b5aa46625b"} Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.910320 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7564f958f5-jmdx2" event={"ID":"86a11f48-404e-4c5e-8ff4-5033a6411956","Type":"ContainerStarted","Data":"e66306d0119128d45b02df2c6c9e9269ad3c75d2a1f457ad3a5b6b7da2f4d4bf"} Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.934311 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77b944d67-mw2nq" event={"ID":"b0326de6-1c1a-4e21-9592-ae86b46d7a3f","Type":"ContainerDied","Data":"74a53a8b6fc2a23210eccd53e198b676934ec49275b7b25077e7e841617ab615"} Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.934376 4881 scope.go:117] "RemoveContainer" containerID="74a966ab9ba8420c744ac8e1932e9ad473ca91de2100fd5d2f1bf2544fd837be" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.934626 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:37 crc kubenswrapper[4881]: I0121 11:21:37.059045 4881 scope.go:117] "RemoveContainer" containerID="da41cb40adea77808d3ff28a4531a5534241d5f62e3dd8c6c92475b8c399e085" Jan 21 11:21:37 crc kubenswrapper[4881]: I0121 11:21:37.132134 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77b944d67-mw2nq"] Jan 21 11:21:37 crc kubenswrapper[4881]: I0121 11:21:37.140969 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77b944d67-mw2nq"] Jan 21 11:21:37 crc kubenswrapper[4881]: I0121 11:21:37.333077 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86045f5e-defd-4c68-a582-c51c9c26e5c7" path="/var/lib/kubelet/pods/86045f5e-defd-4c68-a582-c51c9c26e5c7/volumes" Jan 21 11:21:37 crc kubenswrapper[4881]: I0121 11:21:37.334657 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0326de6-1c1a-4e21-9592-ae86b46d7a3f" path="/var/lib/kubelet/pods/b0326de6-1c1a-4e21-9592-ae86b46d7a3f/volumes" Jan 21 11:21:37 crc kubenswrapper[4881]: I0121 11:21:37.400470 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 11:21:37 crc kubenswrapper[4881]: I0121 11:21:37.989072 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7564f958f5-jmdx2" event={"ID":"86a11f48-404e-4c5e-8ff4-5033a6411956","Type":"ContainerStarted","Data":"616b0a41fd8a2c2ee5e28c950cc2732d336ea85ed0279baddd3033e5e8047a29"} Jan 21 11:21:38 crc kubenswrapper[4881]: I0121 11:21:38.009194 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a22f004-7d84-4edc-86f7-d58adb131a45","Type":"ContainerStarted","Data":"9208d05b46bed633028f2197d2ac1411d6db48aa25317dd65e06acc08bb66328"} Jan 21 11:21:38 crc kubenswrapper[4881]: I0121 11:21:38.014932 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ab676e77-1ab3-4cab-9960-a00babfe74fb","Type":"ContainerStarted","Data":"ec697af1abb76944c05edd307cb15b0a7d14c5932e05640765d6f6ebaadd7de2"} Jan 21 11:21:38 crc kubenswrapper[4881]: I0121 11:21:38.041610 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.041571866 podStartE2EDuration="6.041571866s" podCreationTimestamp="2026-01-21 11:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:38.035404551 +0000 UTC m=+1485.295361020" watchObservedRunningTime="2026-01-21 11:21:38.041571866 +0000 UTC m=+1485.301528335" Jan 21 11:21:38 crc kubenswrapper[4881]: I0121 11:21:38.079536 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.07950211 podStartE2EDuration="6.07950211s" podCreationTimestamp="2026-01-21 11:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:38.063346017 +0000 UTC m=+1485.323302486" watchObservedRunningTime="2026-01-21 11:21:38.07950211 +0000 UTC m=+1485.339458579" Jan 21 11:21:38 crc kubenswrapper[4881]: I0121 11:21:38.941277 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 11:21:39 crc kubenswrapper[4881]: I0121 11:21:39.058105 4881 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ab676e77-1ab3-4cab-9960-a00babfe74fb","Type":"ContainerStarted","Data":"d62273d5cdeb4b121af08c0292482795d31525d1f2baaa55aa351bbc86862520"} Jan 21 11:21:39 crc kubenswrapper[4881]: I0121 11:21:39.096834 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7564f958f5-jmdx2" event={"ID":"86a11f48-404e-4c5e-8ff4-5033a6411956","Type":"ContainerStarted","Data":"7ad067f868610aee4ea7f627e59a4b3c0b472fe4011f02001c57d175d9919418"} Jan 21 11:21:39 crc kubenswrapper[4881]: I0121 11:21:39.133568 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7564f958f5-jmdx2" podStartSLOduration=4.133545866 podStartE2EDuration="4.133545866s" podCreationTimestamp="2026-01-21 11:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:39.127736881 +0000 UTC m=+1486.387693360" watchObservedRunningTime="2026-01-21 11:21:39.133545866 +0000 UTC m=+1486.393502335" Jan 21 11:21:40 crc kubenswrapper[4881]: I0121 11:21:40.113886 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ab676e77-1ab3-4cab-9960-a00babfe74fb","Type":"ContainerStarted","Data":"7f3bced5d39c83f298bf37457f234485d7ed500eb6155a08dcf21e5e09d9c064"} Jan 21 11:21:40 crc kubenswrapper[4881]: I0121 11:21:40.114290 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:40 crc kubenswrapper[4881]: I0121 11:21:40.114307 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:40 crc kubenswrapper[4881]: I0121 11:21:40.114040 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5a22f004-7d84-4edc-86f7-d58adb131a45" containerName="glance-log" containerID="cri-o://9286d3d52dfda503e9a39d6bc904388c1d8fb7d48591cc6a081eaedbcac3451b" gracePeriod=30 Jan 21 11:21:40 crc kubenswrapper[4881]: I0121 11:21:40.114072 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5a22f004-7d84-4edc-86f7-d58adb131a45" containerName="glance-httpd" containerID="cri-o://9208d05b46bed633028f2197d2ac1411d6db48aa25317dd65e06acc08bb66328" gracePeriod=30 Jan 21 11:21:40 crc kubenswrapper[4881]: I0121 11:21:40.152858 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.152829357 podStartE2EDuration="4.152829357s" podCreationTimestamp="2026-01-21 11:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:40.1369128 +0000 UTC m=+1487.396869279" watchObservedRunningTime="2026-01-21 11:21:40.152829357 +0000 UTC m=+1487.412785826" Jan 21 11:21:41 crc kubenswrapper[4881]: I0121 11:21:41.129879 4881 generic.go:334] "Generic (PLEG): container finished" podID="5a22f004-7d84-4edc-86f7-d58adb131a45" containerID="9286d3d52dfda503e9a39d6bc904388c1d8fb7d48591cc6a081eaedbcac3451b" exitCode=143 Jan 21 11:21:41 crc kubenswrapper[4881]: I0121 11:21:41.129991 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"5a22f004-7d84-4edc-86f7-d58adb131a45","Type":"ContainerDied","Data":"9286d3d52dfda503e9a39d6bc904388c1d8fb7d48591cc6a081eaedbcac3451b"} Jan 21 11:21:41 crc kubenswrapper[4881]: I0121 11:21:41.526368 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 11:21:42 crc kubenswrapper[4881]: I0121 11:21:42.009130 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 11:21:42 crc kubenswrapper[4881]: I0121 11:21:42.009673 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="86debe8b-5d02-4f2e-a311-6106609aeb1c" containerName="glance-log" containerID="cri-o://f3bc5d7bc188f1c4ac565e1d75e559e4a8e17c15c9ed4b157de750543aaa6b37" gracePeriod=30 Jan 21 11:21:42 crc kubenswrapper[4881]: I0121 11:21:42.009903 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="86debe8b-5d02-4f2e-a311-6106609aeb1c" containerName="glance-httpd" containerID="cri-o://2fa6aa1996c6f4201fc93d5c8a39f33293aba78e3cc280dea3665101a00cd065" gracePeriod=30 Jan 21 11:21:42 crc kubenswrapper[4881]: I0121 11:21:42.170072 4881 generic.go:334] "Generic (PLEG): container finished" podID="5a22f004-7d84-4edc-86f7-d58adb131a45" containerID="9208d05b46bed633028f2197d2ac1411d6db48aa25317dd65e06acc08bb66328" exitCode=0 Jan 21 11:21:42 crc kubenswrapper[4881]: I0121 11:21:42.170162 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a22f004-7d84-4edc-86f7-d58adb131a45","Type":"ContainerDied","Data":"9208d05b46bed633028f2197d2ac1411d6db48aa25317dd65e06acc08bb66328"} Jan 21 11:21:42 crc kubenswrapper[4881]: I0121 11:21:42.175187 4881 generic.go:334] "Generic (PLEG): container finished" podID="86debe8b-5d02-4f2e-a311-6106609aeb1c" containerID="f3bc5d7bc188f1c4ac565e1d75e559e4a8e17c15c9ed4b157de750543aaa6b37" exitCode=143 Jan 21 11:21:42 crc kubenswrapper[4881]: I0121 11:21:42.175259 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"86debe8b-5d02-4f2e-a311-6106609aeb1c","Type":"ContainerDied","Data":"f3bc5d7bc188f1c4ac565e1d75e559e4a8e17c15c9ed4b157de750543aaa6b37"} Jan 21 11:21:42 crc kubenswrapper[4881]: I0121 11:21:42.178985 4881 generic.go:334] "Generic (PLEG): container finished" podID="75119e97-b896-4b71-ab1f-28db45a4606d" containerID="53e2fe665bdaeb7b9eb972106db909c519d01d1c08141b3cb40de82bd0536330" exitCode=0 Jan 21 11:21:42 crc kubenswrapper[4881]: I0121 11:21:42.179103 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75119e97-b896-4b71-ab1f-28db45a4606d","Type":"ContainerDied","Data":"53e2fe665bdaeb7b9eb972106db909c519d01d1c08141b3cb40de82bd0536330"} Jan 21 11:21:43 crc kubenswrapper[4881]: I0121 11:21:43.191531 4881 generic.go:334] "Generic (PLEG): container finished" podID="86debe8b-5d02-4f2e-a311-6106609aeb1c" containerID="2fa6aa1996c6f4201fc93d5c8a39f33293aba78e3cc280dea3665101a00cd065" exitCode=0 Jan 21 11:21:43 crc kubenswrapper[4881]: I0121 11:21:43.191631 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"86debe8b-5d02-4f2e-a311-6106609aeb1c","Type":"ContainerDied","Data":"2fa6aa1996c6f4201fc93d5c8a39f33293aba78e3cc280dea3665101a00cd065"} Jan 21 11:21:44 crc kubenswrapper[4881]: I0121 11:21:44.311028 4881 scope.go:117] 
"RemoveContainer" containerID="5ccae223d32b8d30267f4d247c29e77d1942427c122a26bc75e9b00b89fa3bc0" Jan 21 11:21:45 crc kubenswrapper[4881]: I0121 11:21:45.835368 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:45 crc kubenswrapper[4881]: I0121 11:21:45.846914 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:46 crc kubenswrapper[4881]: I0121 11:21:46.708698 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:21:46 crc kubenswrapper[4881]: I0121 11:21:46.893640 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:46 crc kubenswrapper[4881]: I0121 11:21:46.895672 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75119e97-b896-4b71-ab1f-28db45a4606d-log-httpd\") pod \"75119e97-b896-4b71-ab1f-28db45a4606d\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " Jan 21 11:21:46 crc kubenswrapper[4881]: I0121 11:21:46.895743 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-config-data\") pod \"75119e97-b896-4b71-ab1f-28db45a4606d\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " Jan 21 11:21:46 crc kubenswrapper[4881]: I0121 11:21:46.895829 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-sg-core-conf-yaml\") pod \"75119e97-b896-4b71-ab1f-28db45a4606d\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " Jan 21 11:21:46 crc kubenswrapper[4881]: I0121 11:21:46.895914 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cmwf\" (UniqueName: \"kubernetes.io/projected/75119e97-b896-4b71-ab1f-28db45a4606d-kube-api-access-2cmwf\") pod \"75119e97-b896-4b71-ab1f-28db45a4606d\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " Jan 21 11:21:46 crc kubenswrapper[4881]: I0121 11:21:46.895952 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-scripts\") pod \"75119e97-b896-4b71-ab1f-28db45a4606d\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " Jan 21 11:21:46 crc kubenswrapper[4881]: I0121 11:21:46.896040 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-combined-ca-bundle\") pod \"75119e97-b896-4b71-ab1f-28db45a4606d\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " Jan 21 11:21:46 crc kubenswrapper[4881]: I0121 11:21:46.896164 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75119e97-b896-4b71-ab1f-28db45a4606d-run-httpd\") pod \"75119e97-b896-4b71-ab1f-28db45a4606d\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " Jan 21 11:21:46 crc kubenswrapper[4881]: I0121 11:21:46.896957 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75119e97-b896-4b71-ab1f-28db45a4606d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod 
"75119e97-b896-4b71-ab1f-28db45a4606d" (UID: "75119e97-b896-4b71-ab1f-28db45a4606d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:46 crc kubenswrapper[4881]: I0121 11:21:46.897203 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75119e97-b896-4b71-ab1f-28db45a4606d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "75119e97-b896-4b71-ab1f-28db45a4606d" (UID: "75119e97-b896-4b71-ab1f-28db45a4606d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:46 crc kubenswrapper[4881]: I0121 11:21:46.900962 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 11:21:46 crc kubenswrapper[4881]: I0121 11:21:46.918145 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75119e97-b896-4b71-ab1f-28db45a4606d-kube-api-access-2cmwf" (OuterVolumeSpecName: "kube-api-access-2cmwf") pod "75119e97-b896-4b71-ab1f-28db45a4606d" (UID: "75119e97-b896-4b71-ab1f-28db45a4606d"). InnerVolumeSpecName "kube-api-access-2cmwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:46 crc kubenswrapper[4881]: I0121 11:21:46.934125 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-scripts" (OuterVolumeSpecName: "scripts") pod "75119e97-b896-4b71-ab1f-28db45a4606d" (UID: "75119e97-b896-4b71-ab1f-28db45a4606d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.000940 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "75119e97-b896-4b71-ab1f-28db45a4606d" (UID: "75119e97-b896-4b71-ab1f-28db45a4606d"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.005937 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-public-tls-certs\") pod \"5a22f004-7d84-4edc-86f7-d58adb131a45\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.006000 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-config-data\") pod \"86debe8b-5d02-4f2e-a311-6106609aeb1c\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.006042 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"86debe8b-5d02-4f2e-a311-6106609aeb1c\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.006063 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a22f004-7d84-4edc-86f7-d58adb131a45-httpd-run\") pod \"5a22f004-7d84-4edc-86f7-d58adb131a45\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.006095 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-scripts\") pod \"5a22f004-7d84-4edc-86f7-d58adb131a45\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.006123 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6fqw\" (UniqueName: \"kubernetes.io/projected/86debe8b-5d02-4f2e-a311-6106609aeb1c-kube-api-access-v6fqw\") pod \"86debe8b-5d02-4f2e-a311-6106609aeb1c\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.006215 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-config-data\") pod \"5a22f004-7d84-4edc-86f7-d58adb131a45\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.006234 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-internal-tls-certs\") pod \"86debe8b-5d02-4f2e-a311-6106609aeb1c\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.006269 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-combined-ca-bundle\") pod \"86debe8b-5d02-4f2e-a311-6106609aeb1c\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.006300 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh585\" (UniqueName: \"kubernetes.io/projected/5a22f004-7d84-4edc-86f7-d58adb131a45-kube-api-access-xh585\") pod \"5a22f004-7d84-4edc-86f7-d58adb131a45\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " Jan 
21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.006324 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a22f004-7d84-4edc-86f7-d58adb131a45-logs\") pod \"5a22f004-7d84-4edc-86f7-d58adb131a45\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.006403 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-combined-ca-bundle\") pod \"5a22f004-7d84-4edc-86f7-d58adb131a45\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.006443 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86debe8b-5d02-4f2e-a311-6106609aeb1c-logs\") pod \"86debe8b-5d02-4f2e-a311-6106609aeb1c\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.006550 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-scripts\") pod \"86debe8b-5d02-4f2e-a311-6106609aeb1c\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.006583 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"5a22f004-7d84-4edc-86f7-d58adb131a45\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.006609 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/86debe8b-5d02-4f2e-a311-6106609aeb1c-httpd-run\") pod \"86debe8b-5d02-4f2e-a311-6106609aeb1c\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.007089 4881 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75119e97-b896-4b71-ab1f-28db45a4606d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.007114 4881 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75119e97-b896-4b71-ab1f-28db45a4606d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.007123 4881 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.007133 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cmwf\" (UniqueName: \"kubernetes.io/projected/75119e97-b896-4b71-ab1f-28db45a4606d-kube-api-access-2cmwf\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.007143 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.021037 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86debe8b-5d02-4f2e-a311-6106609aeb1c-httpd-run" 
(OuterVolumeSpecName: "httpd-run") pod "86debe8b-5d02-4f2e-a311-6106609aeb1c" (UID: "86debe8b-5d02-4f2e-a311-6106609aeb1c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.021347 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86debe8b-5d02-4f2e-a311-6106609aeb1c-logs" (OuterVolumeSpecName: "logs") pod "86debe8b-5d02-4f2e-a311-6106609aeb1c" (UID: "86debe8b-5d02-4f2e-a311-6106609aeb1c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.022046 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a22f004-7d84-4edc-86f7-d58adb131a45-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5a22f004-7d84-4edc-86f7-d58adb131a45" (UID: "5a22f004-7d84-4edc-86f7-d58adb131a45"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.022236 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a22f004-7d84-4edc-86f7-d58adb131a45-logs" (OuterVolumeSpecName: "logs") pod "5a22f004-7d84-4edc-86f7-d58adb131a45" (UID: "5a22f004-7d84-4edc-86f7-d58adb131a45"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.054807 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "5a22f004-7d84-4edc-86f7-d58adb131a45" (UID: "5a22f004-7d84-4edc-86f7-d58adb131a45"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.055561 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-scripts" (OuterVolumeSpecName: "scripts") pod "86debe8b-5d02-4f2e-a311-6106609aeb1c" (UID: "86debe8b-5d02-4f2e-a311-6106609aeb1c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.057240 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86debe8b-5d02-4f2e-a311-6106609aeb1c-kube-api-access-v6fqw" (OuterVolumeSpecName: "kube-api-access-v6fqw") pod "86debe8b-5d02-4f2e-a311-6106609aeb1c" (UID: "86debe8b-5d02-4f2e-a311-6106609aeb1c"). InnerVolumeSpecName "kube-api-access-v6fqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.058025 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a22f004-7d84-4edc-86f7-d58adb131a45-kube-api-access-xh585" (OuterVolumeSpecName: "kube-api-access-xh585") pod "5a22f004-7d84-4edc-86f7-d58adb131a45" (UID: "5a22f004-7d84-4edc-86f7-d58adb131a45"). InnerVolumeSpecName "kube-api-access-xh585". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.068228 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-scripts" (OuterVolumeSpecName: "scripts") pod "5a22f004-7d84-4edc-86f7-d58adb131a45" (UID: "5a22f004-7d84-4edc-86f7-d58adb131a45"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.069082 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "86debe8b-5d02-4f2e-a311-6106609aeb1c" (UID: "86debe8b-5d02-4f2e-a311-6106609aeb1c"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.088999 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75119e97-b896-4b71-ab1f-28db45a4606d" (UID: "75119e97-b896-4b71-ab1f-28db45a4606d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.121326 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86debe8b-5d02-4f2e-a311-6106609aeb1c-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.121373 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.121400 4881 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.121415 4881 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/86debe8b-5d02-4f2e-a311-6106609aeb1c-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.121427 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.121451 4881 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.121463 4881 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a22f004-7d84-4edc-86f7-d58adb131a45-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.121474 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.121485 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6fqw\" (UniqueName: \"kubernetes.io/projected/86debe8b-5d02-4f2e-a311-6106609aeb1c-kube-api-access-v6fqw\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.121497 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh585\" (UniqueName: \"kubernetes.io/projected/5a22f004-7d84-4edc-86f7-d58adb131a45-kube-api-access-xh585\") on node \"crc\" DevicePath \"\"" 
Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.121507 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a22f004-7d84-4edc-86f7-d58adb131a45-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.154511 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86debe8b-5d02-4f2e-a311-6106609aeb1c" (UID: "86debe8b-5d02-4f2e-a311-6106609aeb1c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.180102 4881 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.197699 4881 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.228367 4881 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.228405 4881 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.228415 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.235317 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a22f004-7d84-4edc-86f7-d58adb131a45" (UID: "5a22f004-7d84-4edc-86f7-d58adb131a45"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.303589 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.306350 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5a22f004-7d84-4edc-86f7-d58adb131a45" (UID: "5a22f004-7d84-4edc-86f7-d58adb131a45"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.315609 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-config-data" (OuterVolumeSpecName: "config-data") pod "86debe8b-5d02-4f2e-a311-6106609aeb1c" (UID: "86debe8b-5d02-4f2e-a311-6106609aeb1c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.317011 4881 util.go:48] "No ready sandbox for pod can be found. 
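
The block above is the kubelet volume manager tearing down the three deleted pods: UnmountVolume.TearDown per volume, UnmountDevice for the two local PVs, and a final "Volume detached" record once the reconciler sees the DevicePath cleared. A minimal sketch of how one might reconstruct that timeline per volume from journal lines on stdin (the script and its field names are illustrative, not kubelet code):

#!/usr/bin/env python3
# Illustrative sketch: rebuild the per-volume teardown timeline from
# kubenswrapper journal lines fed on stdin.
import collections
import re
import sys

PHASES = {
    "teardown": re.compile(r'(\d\d:\d\d:\d\d\.\d+).*UnmountVolume\.TearDown succeeded for volume "[^"]*/([^"/]+)"'),
    "unmounted": re.compile(r'(\d\d:\d\d:\d\d\.\d+).*UnmountDevice succeeded for volume "([^"]+)"'),
    "detached": re.compile(r'(\d\d:\d\d:\d\d\.\d+).*Volume detached for volume \\?"([^"\\]+)\\?"'),
}

timeline = collections.defaultdict(list)  # volume -> [(timestamp, phase)]
for line in sys.stdin:
    for phase, pat in PHASES.items():
        m = pat.search(line)
        if m:
            timeline[m.group(2)].append((m.group(1), phase))

for volume, seq in sorted(timeline.items()):
    print(volume, "->", ", ".join(f"{p}@{t}" for t, p in sorted(seq)))

Fed something like the output of journalctl covering this window, it would print, for example, local-storage11-crc -> teardown@11:21:47.069082, unmounted@11:21:47.180102, detached@11:21:47.228405, matching the entries above.
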
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.327383 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.329868 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-config-data" (OuterVolumeSpecName: "config-data") pod "5a22f004-7d84-4edc-86f7-d58adb131a45" (UID: "5a22f004-7d84-4edc-86f7-d58adb131a45"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.329946 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-config-data\") pod \"5a22f004-7d84-4edc-86f7-d58adb131a45\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.330632 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.330654 4881 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.330668 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: W0121 11:21:47.330800 4881 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/5a22f004-7d84-4edc-86f7-d58adb131a45/volumes/kubernetes.io~secret/config-data Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.330817 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-config-data" (OuterVolumeSpecName: "config-data") pod "5a22f004-7d84-4edc-86f7-d58adb131a45" (UID: "5a22f004-7d84-4edc-86f7-d58adb131a45"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.337211 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.343086 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75119e97-b896-4b71-ab1f-28db45a4606d","Type":"ContainerDied","Data":"9b7298fa3a3fcd477e8d84c1587f761e32e00a24d488249df9cca1ca349c7bc0"} Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.343153 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a22f004-7d84-4edc-86f7-d58adb131a45","Type":"ContainerDied","Data":"c118bf221673b7075db16b12d92f917f44d316d1edbfb63816381a8a7fe9bfa7"} Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.343174 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"86debe8b-5d02-4f2e-a311-6106609aeb1c","Type":"ContainerDied","Data":"d67de62ed844d45b06b45329375dde0d59a63d15e298263c3618894b7576c1ba"} Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.343202 4881 scope.go:117] "RemoveContainer" containerID="80eb788c6d10eab27f68e4afaa093b8aa3a02ead209347f52848e0e84c80db9f" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.350415 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "86debe8b-5d02-4f2e-a311-6106609aeb1c" (UID: "86debe8b-5d02-4f2e-a311-6106609aeb1c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.357531 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e","Type":"ContainerStarted","Data":"4ba0181030ceb68e7fdb5249d09391d40feea2fca13e45d6b4d9c7f3ba56c71d"} Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.360479 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b0b6ce2c-5ae8-496f-9374-d3069bc3d149","Type":"ContainerStarted","Data":"68d1d3fbf220c6872fbb3ed3d2d8517f6217ec6ebfb2a0e3e14a3c8a97c0baab"} Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.375370 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-config-data" (OuterVolumeSpecName: "config-data") pod "75119e97-b896-4b71-ab1f-28db45a4606d" (UID: "75119e97-b896-4b71-ab1f-28db45a4606d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.392670 4881 scope.go:117] "RemoveContainer" containerID="899f70ee131f6e530963ca573a67921fd95a35fbdae76709308568e8f0b66d06" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.423054 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.527259183 podStartE2EDuration="23.423031059s" podCreationTimestamp="2026-01-21 11:21:24 +0000 UTC" firstStartedPulling="2026-01-21 11:21:26.162955815 +0000 UTC m=+1473.422912274" lastFinishedPulling="2026-01-21 11:21:46.058727681 +0000 UTC m=+1493.318684150" observedRunningTime="2026-01-21 11:21:47.405721209 +0000 UTC m=+1494.665677688" watchObservedRunningTime="2026-01-21 11:21:47.423031059 +0000 UTC m=+1494.682987528" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.431272 4881 scope.go:117] "RemoveContainer" containerID="53e2fe665bdaeb7b9eb972106db909c519d01d1c08141b3cb40de82bd0536330" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.439170 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.439476 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.439675 4881 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.458881 4881 scope.go:117] "RemoveContainer" containerID="bc7224d9bf84f344828f19a13fb8096ac19d517cb3bb70d8fce495b5aa46625b" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.492013 4881 scope.go:117] "RemoveContainer" containerID="9208d05b46bed633028f2197d2ac1411d6db48aa25317dd65e06acc08bb66328" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.524415 4881 scope.go:117] "RemoveContainer" containerID="9286d3d52dfda503e9a39d6bc904388c1d8fb7d48591cc6a081eaedbcac3451b" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.551079 4881 scope.go:117] "RemoveContainer" containerID="2fa6aa1996c6f4201fc93d5c8a39f33293aba78e3cc280dea3665101a00cd065" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.580883 4881 scope.go:117] "RemoveContainer" containerID="f3bc5d7bc188f1c4ac565e1d75e559e4a8e17c15c9ed4b157de750543aaa6b37" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.663664 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.682775 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.702685 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.719668 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.728829 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:47 crc kubenswrapper[4881]: E0121 11:21:47.729370 4881 
Jan 21 11:21:47 crc kubenswrapper[4881]: E0121 11:21:47.729370 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86debe8b-5d02-4f2e-a311-6106609aeb1c" containerName="glance-httpd"
Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.729392 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="86debe8b-5d02-4f2e-a311-6106609aeb1c" containerName="glance-httpd"
Jan 21 11:21:47 crc kubenswrapper[4881]: E0121 11:21:47.729404 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a22f004-7d84-4edc-86f7-d58adb131a45" containerName="glance-log"
Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.729411 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a22f004-7d84-4edc-86f7-d58adb131a45" containerName="glance-log"
Jan 21 11:21:47 crc kubenswrapper[4881]: E0121 11:21:47.729424 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="ceilometer-central-agent"
Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.729433 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="ceilometer-central-agent"
Jan 21 11:21:47 crc kubenswrapper[4881]: E0121 11:21:47.729445 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="proxy-httpd"
Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.729453 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="proxy-httpd"
Jan 21 11:21:47 crc kubenswrapper[4881]: E0121 11:21:47.729476 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="ceilometer-notification-agent"
Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.729483 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="ceilometer-notification-agent"
Jan 21 11:21:47 crc kubenswrapper[4881]: E0121 11:21:47.729497 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="sg-core"
Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.729505 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="sg-core"
Jan 21 11:21:47 crc kubenswrapper[4881]: E0121 11:21:47.729519 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86debe8b-5d02-4f2e-a311-6106609aeb1c" containerName="glance-log"
Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.729524 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="86debe8b-5d02-4f2e-a311-6106609aeb1c" containerName="glance-log"
Jan 21 11:21:47 crc kubenswrapper[4881]: E0121 11:21:47.729544 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a22f004-7d84-4edc-86f7-d58adb131a45" containerName="glance-httpd"
Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.729550 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a22f004-7d84-4edc-86f7-d58adb131a45" containerName="glance-httpd"
Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.729773 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="86debe8b-5d02-4f2e-a311-6106609aeb1c" containerName="glance-log"
Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.729813 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="proxy-httpd"
Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.729829 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a22f004-7d84-4edc-86f7-d58adb131a45" containerName="glance-log"
Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.729843 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="86debe8b-5d02-4f2e-a311-6106609aeb1c" containerName="glance-httpd"
Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.729857 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="sg-core"
Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.729872 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="ceilometer-notification-agent"
Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.729884 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a22f004-7d84-4edc-86f7-d58adb131a45" containerName="glance-httpd"
Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.729894 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="ceilometer-central-agent"
Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.732243 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.742392 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.742615 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.743337 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.758585 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.771536 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.780559 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.782501 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.793337 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.805609 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.807409 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.807612 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
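
The RemoveStaleState E/I pairs earlier in this burst are the CPU and memory managers dropping per-container state for the three deleted pod UIDs as the replacement pods are admitted: cpu_manager flags the stale container, state_mem deletes its CPUSet assignment, and memory_manager drops its memory state. An illustrative cross-check (ad hoc script, not kubelet code) that every container was cleaned from both managers:

#!/usr/bin/env python3
# Illustrative cross-check: confirm each stale container was removed from
# both the CPU and memory managers. Journal lines on stdin.
import collections
import re
import sys

PAT = re.compile(r'(cpu_manager|memory_manager)\.go:\d+\].*?'
                 r'podUID="([0-9a-f-]+)" containerName="([^"]+)"')

cleaned = collections.defaultdict(set)
for line in sys.stdin:
    m = PAT.search(line)
    if m:
        manager, pod_uid, container = m.groups()
        cleaned[(pod_uid, container)].add(manager)

for (pod_uid, container), managers in sorted(cleaned.items()):
    print(pod_uid[:8], container, "cleaned by", "+".join(sorted(managers)))
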
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.808448 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.808521 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.814014 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.814318 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.819227 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-f8snw" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.832930 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.855247 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr9jz\" (UniqueName: \"kubernetes.io/projected/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-kube-api-access-hr9jz\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.855647 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-run-httpd\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.855824 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.856022 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.856173 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-log-httpd\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.856314 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-scripts\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.856435 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-config-data\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.958807 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-config-data\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.958874 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec8e0779-1552-4ebb-88d7-95a49e734b55-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.958908 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.958973 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-scripts\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959001 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njpb8\" (UniqueName: \"kubernetes.io/projected/ec8e0779-1552-4ebb-88d7-95a49e734b55-kube-api-access-njpb8\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959030 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec8e0779-1552-4ebb-88d7-95a49e734b55-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959052 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec8e0779-1552-4ebb-88d7-95a49e734b55-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959072 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d665\" (UniqueName: \"kubernetes.io/projected/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-kube-api-access-6d665\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959095 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ec8e0779-1552-4ebb-88d7-95a49e734b55-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959132 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr9jz\" (UniqueName: \"kubernetes.io/projected/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-kube-api-access-hr9jz\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959154 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959184 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec8e0779-1552-4ebb-88d7-95a49e734b55-logs\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959248 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959273 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-logs\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959293 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec8e0779-1552-4ebb-88d7-95a49e734b55-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959315 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-run-httpd\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959338 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959359 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-config-data\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959409 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959449 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959511 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959533 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-log-httpd\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959584 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-scripts\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.965510 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-log-httpd\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.967008 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-run-httpd\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.972508 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.973511 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.974740 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-config-data\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.998847 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-scripts\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.008239 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr9jz\" (UniqueName: \"kubernetes.io/projected/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-kube-api-access-hr9jz\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061069 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec8e0779-1552-4ebb-88d7-95a49e734b55-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061135 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061181 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-scripts\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061206 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njpb8\" (UniqueName: \"kubernetes.io/projected/ec8e0779-1552-4ebb-88d7-95a49e734b55-kube-api-access-njpb8\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061234 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec8e0779-1552-4ebb-88d7-95a49e734b55-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061255 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec8e0779-1552-4ebb-88d7-95a49e734b55-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061278 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d665\" (UniqueName: \"kubernetes.io/projected/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-kube-api-access-6d665\") pod 
\"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061298 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ec8e0779-1552-4ebb-88d7-95a49e734b55-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061332 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061362 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec8e0779-1552-4ebb-88d7-95a49e734b55-logs\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061428 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061453 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-logs\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061474 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec8e0779-1552-4ebb-88d7-95a49e734b55-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061504 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-config-data\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061559 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061640 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc 
kubenswrapper[4881]: I0121 11:21:48.061919 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.062037 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ec8e0779-1552-4ebb-88d7-95a49e734b55-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.062458 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-logs\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.062551 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec8e0779-1552-4ebb-88d7-95a49e734b55-logs\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.062570 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.062918 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.074897 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec8e0779-1552-4ebb-88d7-95a49e734b55-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.076714 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.076713 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec8e0779-1552-4ebb-88d7-95a49e734b55-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.077792 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-config-data\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.078891 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.079338 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-scripts\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.079947 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec8e0779-1552-4ebb-88d7-95a49e734b55-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.081592 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec8e0779-1552-4ebb-88d7-95a49e734b55-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.083739 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njpb8\" (UniqueName: \"kubernetes.io/projected/ec8e0779-1552-4ebb-88d7-95a49e734b55-kube-api-access-njpb8\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.086509 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d665\" (UniqueName: \"kubernetes.io/projected/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-kube-api-access-6d665\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.100840 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.120800 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.137607 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.157217 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.182359 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.785851 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.967745 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 11:21:48 crc kubenswrapper[4881]: W0121 11:21:48.970653 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec8e0779_1552_4ebb_88d7_95a49e734b55.slice/crio-39110a92c180d47914e6a9442ccb9e89aabc202538f5a509c78ad2619ec9a5f9 WatchSource:0}: Error finding container 39110a92c180d47914e6a9442ccb9e89aabc202538f5a509c78ad2619ec9a5f9: Status 404 returned error can't find the container with id 39110a92c180d47914e6a9442ccb9e89aabc202538f5a509c78ad2619ec9a5f9 Jan 21 11:21:49 crc kubenswrapper[4881]: I0121 11:21:49.324205 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a22f004-7d84-4edc-86f7-d58adb131a45" path="/var/lib/kubelet/pods/5a22f004-7d84-4edc-86f7-d58adb131a45/volumes" Jan 21 11:21:49 crc kubenswrapper[4881]: I0121 11:21:49.325393 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" path="/var/lib/kubelet/pods/75119e97-b896-4b71-ab1f-28db45a4606d/volumes" Jan 21 11:21:49 crc kubenswrapper[4881]: I0121 11:21:49.326766 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86debe8b-5d02-4f2e-a311-6106609aeb1c" path="/var/lib/kubelet/pods/86debe8b-5d02-4f2e-a311-6106609aeb1c/volumes" Jan 21 11:21:49 crc kubenswrapper[4881]: I0121 11:21:49.413633 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ec8e0779-1552-4ebb-88d7-95a49e734b55","Type":"ContainerStarted","Data":"39110a92c180d47914e6a9442ccb9e89aabc202538f5a509c78ad2619ec9a5f9"} Jan 21 11:21:49 crc kubenswrapper[4881]: I0121 11:21:49.423601 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d84ba548-9d82-44b7-bae5-bf8cf84ecc79","Type":"ContainerStarted","Data":"5ef74248d816cbba0967845a616d8ff93c71875da1f2537b3583d30494d188a0"} Jan 21 11:21:49 crc kubenswrapper[4881]: I0121 11:21:49.423654 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d84ba548-9d82-44b7-bae5-bf8cf84ecc79","Type":"ContainerStarted","Data":"95c906c4b339a07e39ec45c37bd23642eb30462373347c321f4ca0cc4f7e8653"} Jan 21 11:21:49 crc kubenswrapper[4881]: I0121 11:21:49.498119 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 21 11:21:49 crc kubenswrapper[4881]: I0121 11:21:49.545327 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 11:21:49 crc kubenswrapper[4881]: I0121 11:21:49.556863 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 21 11:21:50 crc kubenswrapper[4881]: I0121 11:21:50.440380 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"ec8e0779-1552-4ebb-88d7-95a49e734b55","Type":"ContainerStarted","Data":"8bcb045bcc62c4f01ca1a6052f969375e7aa0b8011729a55dd9e236ba89e4036"} Jan 21 11:21:50 crc kubenswrapper[4881]: I0121 11:21:50.450178 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d84ba548-9d82-44b7-bae5-bf8cf84ecc79","Type":"ContainerStarted","Data":"53f83f934fef330d755d320c983315d32feeaac6da62dbb78c115b45e16f216a"} Jan 21 11:21:50 crc kubenswrapper[4881]: I0121 11:21:50.452302 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3e7b52fc-b295-475c-bef6-074b1cb2a2f5","Type":"ContainerStarted","Data":"76bc3f20d39aef05146ba621d24aec9817e955bcea55e3efe174d033160d4c2f"} Jan 21 11:21:50 crc kubenswrapper[4881]: I0121 11:21:50.452568 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 21 11:21:50 crc kubenswrapper[4881]: I0121 11:21:50.517156 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 21 11:21:51 crc kubenswrapper[4881]: I0121 11:21:51.069557 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 21 11:21:51 crc kubenswrapper[4881]: I0121 11:21:51.479225 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d84ba548-9d82-44b7-bae5-bf8cf84ecc79","Type":"ContainerStarted","Data":"0967e57a0feff48d2185c1e282e0585b131cee338ade45ea85673a62193b1f57"} Jan 21 11:21:51 crc kubenswrapper[4881]: I0121 11:21:51.481798 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3e7b52fc-b295-475c-bef6-074b1cb2a2f5","Type":"ContainerStarted","Data":"55dc30928b183d510f03cc70c0e25705360cd5d87786d3622db2ad0b70290c03"} Jan 21 11:21:51 crc kubenswrapper[4881]: I0121 11:21:51.491038 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ec8e0779-1552-4ebb-88d7-95a49e734b55","Type":"ContainerStarted","Data":"b1c6240f599b9b984ffca9fcfd23cfeb7e6e9f84572b199e0dc9b03860eae9e1"} Jan 21 11:21:51 crc kubenswrapper[4881]: I0121 11:21:51.515342 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.515319601 podStartE2EDuration="4.515319601s" podCreationTimestamp="2026-01-21 11:21:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:51.508331617 +0000 UTC m=+1498.768288106" watchObservedRunningTime="2026-01-21 11:21:51.515319601 +0000 UTC m=+1498.775276070" Jan 21 11:21:51 crc kubenswrapper[4881]: I0121 11:21:51.536339 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.536314773 podStartE2EDuration="4.536314773s" podCreationTimestamp="2026-01-21 11:21:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:51.531975855 +0000 UTC m=+1498.791932324" watchObservedRunningTime="2026-01-21 11:21:51.536314773 +0000 UTC m=+1498.796271242" Jan 21 11:21:52 crc kubenswrapper[4881]: I0121 11:21:52.504682 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"3e7b52fc-b295-475c-bef6-074b1cb2a2f5","Type":"ContainerStarted","Data":"a297f0a54d599fae684fb0eb10035eee89893af8546f6e640b1500c94c2b065d"} Jan 21 11:21:52 crc kubenswrapper[4881]: I0121 11:21:52.504944 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerName="watcher-decision-engine" containerID="cri-o://4ba0181030ceb68e7fdb5249d09391d40feea2fca13e45d6b4d9c7f3ba56c71d" gracePeriod=30 Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.040286 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-b85xv"] Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.051493 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-b85xv" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.054926 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-b85xv"] Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.082218 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a601b0e-b326-4e55-901e-08a32fe24005-operator-scripts\") pod \"nova-api-db-create-b85xv\" (UID: \"2a601b0e-b326-4e55-901e-08a32fe24005\") " pod="openstack/nova-api-db-create-b85xv" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.082290 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4g8z\" (UniqueName: \"kubernetes.io/projected/2a601b0e-b326-4e55-901e-08a32fe24005-kube-api-access-s4g8z\") pod \"nova-api-db-create-b85xv\" (UID: \"2a601b0e-b326-4e55-901e-08a32fe24005\") " pod="openstack/nova-api-db-create-b85xv" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.160877 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-jdk2x"] Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.162543 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-jdk2x" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.182949 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-jdk2x"] Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.184973 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/502efce3-0d16-491d-b6fa-1b1d98f76d4b-operator-scripts\") pod \"nova-cell0-db-create-jdk2x\" (UID: \"502efce3-0d16-491d-b6fa-1b1d98f76d4b\") " pod="openstack/nova-cell0-db-create-jdk2x" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.185076 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a601b0e-b326-4e55-901e-08a32fe24005-operator-scripts\") pod \"nova-api-db-create-b85xv\" (UID: \"2a601b0e-b326-4e55-901e-08a32fe24005\") " pod="openstack/nova-api-db-create-b85xv" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.185116 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4g8z\" (UniqueName: \"kubernetes.io/projected/2a601b0e-b326-4e55-901e-08a32fe24005-kube-api-access-s4g8z\") pod \"nova-api-db-create-b85xv\" (UID: \"2a601b0e-b326-4e55-901e-08a32fe24005\") " pod="openstack/nova-api-db-create-b85xv" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.185227 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x9h2\" (UniqueName: \"kubernetes.io/projected/502efce3-0d16-491d-b6fa-1b1d98f76d4b-kube-api-access-5x9h2\") pod \"nova-cell0-db-create-jdk2x\" (UID: \"502efce3-0d16-491d-b6fa-1b1d98f76d4b\") " pod="openstack/nova-cell0-db-create-jdk2x" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.186287 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a601b0e-b326-4e55-901e-08a32fe24005-operator-scripts\") pod \"nova-api-db-create-b85xv\" (UID: \"2a601b0e-b326-4e55-901e-08a32fe24005\") " pod="openstack/nova-api-db-create-b85xv" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.219715 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4g8z\" (UniqueName: \"kubernetes.io/projected/2a601b0e-b326-4e55-901e-08a32fe24005-kube-api-access-s4g8z\") pod \"nova-api-db-create-b85xv\" (UID: \"2a601b0e-b326-4e55-901e-08a32fe24005\") " pod="openstack/nova-api-db-create-b85xv" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.287481 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/502efce3-0d16-491d-b6fa-1b1d98f76d4b-operator-scripts\") pod \"nova-cell0-db-create-jdk2x\" (UID: \"502efce3-0d16-491d-b6fa-1b1d98f76d4b\") " pod="openstack/nova-cell0-db-create-jdk2x" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.287936 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x9h2\" (UniqueName: \"kubernetes.io/projected/502efce3-0d16-491d-b6fa-1b1d98f76d4b-kube-api-access-5x9h2\") pod \"nova-cell0-db-create-jdk2x\" (UID: \"502efce3-0d16-491d-b6fa-1b1d98f76d4b\") " pod="openstack/nova-cell0-db-create-jdk2x" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.288806 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/502efce3-0d16-491d-b6fa-1b1d98f76d4b-operator-scripts\") pod \"nova-cell0-db-create-jdk2x\" (UID: \"502efce3-0d16-491d-b6fa-1b1d98f76d4b\") " pod="openstack/nova-cell0-db-create-jdk2x" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.333749 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x9h2\" (UniqueName: \"kubernetes.io/projected/502efce3-0d16-491d-b6fa-1b1d98f76d4b-kube-api-access-5x9h2\") pod \"nova-cell0-db-create-jdk2x\" (UID: \"502efce3-0d16-491d-b6fa-1b1d98f76d4b\") " pod="openstack/nova-cell0-db-create-jdk2x" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.377182 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-fb46-account-create-update-xxwmq"] Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.378466 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-fb46-account-create-update-xxwmq" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.391020 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-b85xv" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.410162 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f9n8\" (UniqueName: \"kubernetes.io/projected/29487dae-24e9-4d5b-9819-99516df78630-kube-api-access-6f9n8\") pod \"nova-api-fb46-account-create-update-xxwmq\" (UID: \"29487dae-24e9-4d5b-9819-99516df78630\") " pod="openstack/nova-api-fb46-account-create-update-xxwmq" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.410594 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29487dae-24e9-4d5b-9819-99516df78630-operator-scripts\") pod \"nova-api-fb46-account-create-update-xxwmq\" (UID: \"29487dae-24e9-4d5b-9819-99516df78630\") " pod="openstack/nova-api-fb46-account-create-update-xxwmq" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.424927 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.492912 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-f99bl"] Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.516490 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29487dae-24e9-4d5b-9819-99516df78630-operator-scripts\") pod \"nova-api-fb46-account-create-update-xxwmq\" (UID: \"29487dae-24e9-4d5b-9819-99516df78630\") " pod="openstack/nova-api-fb46-account-create-update-xxwmq" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.519997 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f9n8\" (UniqueName: \"kubernetes.io/projected/29487dae-24e9-4d5b-9819-99516df78630-kube-api-access-6f9n8\") pod \"nova-api-fb46-account-create-update-xxwmq\" (UID: \"29487dae-24e9-4d5b-9819-99516df78630\") " pod="openstack/nova-api-fb46-account-create-update-xxwmq" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.527248 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29487dae-24e9-4d5b-9819-99516df78630-operator-scripts\") pod \"nova-api-fb46-account-create-update-xxwmq\" (UID: 
\"29487dae-24e9-4d5b-9819-99516df78630\") " pod="openstack/nova-api-fb46-account-create-update-xxwmq" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.551111 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-fb46-account-create-update-xxwmq"] Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.551248 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-f99bl" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.612482 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jdk2x" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.629733 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f9n8\" (UniqueName: \"kubernetes.io/projected/29487dae-24e9-4d5b-9819-99516df78630-kube-api-access-6f9n8\") pod \"nova-api-fb46-account-create-update-xxwmq\" (UID: \"29487dae-24e9-4d5b-9819-99516df78630\") " pod="openstack/nova-api-fb46-account-create-update-xxwmq" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.670284 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d84ba548-9d82-44b7-bae5-bf8cf84ecc79","Type":"ContainerStarted","Data":"786551fea0a0b08ed4797eaa4ac0bd544644fed6b4135ad7593d1cf541bbe884"} Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.670378 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.727420 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-fb46-account-create-update-xxwmq" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.731768 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2c35a47-0e6e-4760-9026-617ca187b066-operator-scripts\") pod \"nova-cell1-db-create-f99bl\" (UID: \"f2c35a47-0e6e-4760-9026-617ca187b066\") " pod="openstack/nova-cell1-db-create-f99bl" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.732077 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhnvm\" (UniqueName: \"kubernetes.io/projected/f2c35a47-0e6e-4760-9026-617ca187b066-kube-api-access-lhnvm\") pod \"nova-cell1-db-create-f99bl\" (UID: \"f2c35a47-0e6e-4760-9026-617ca187b066\") " pod="openstack/nova-cell1-db-create-f99bl" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.741180 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-f99bl"] Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.834237 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhnvm\" (UniqueName: \"kubernetes.io/projected/f2c35a47-0e6e-4760-9026-617ca187b066-kube-api-access-lhnvm\") pod \"nova-cell1-db-create-f99bl\" (UID: \"f2c35a47-0e6e-4760-9026-617ca187b066\") " pod="openstack/nova-cell1-db-create-f99bl" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.834319 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2c35a47-0e6e-4760-9026-617ca187b066-operator-scripts\") pod \"nova-cell1-db-create-f99bl\" (UID: \"f2c35a47-0e6e-4760-9026-617ca187b066\") " pod="openstack/nova-cell1-db-create-f99bl" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 
11:21:53.842451 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2c35a47-0e6e-4760-9026-617ca187b066-operator-scripts\") pod \"nova-cell1-db-create-f99bl\" (UID: \"f2c35a47-0e6e-4760-9026-617ca187b066\") " pod="openstack/nova-cell1-db-create-f99bl" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.897299 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhnvm\" (UniqueName: \"kubernetes.io/projected/f2c35a47-0e6e-4760-9026-617ca187b066-kube-api-access-lhnvm\") pod \"nova-cell1-db-create-f99bl\" (UID: \"f2c35a47-0e6e-4760-9026-617ca187b066\") " pod="openstack/nova-cell1-db-create-f99bl" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.933083 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-5627-account-create-update-mbnwf"] Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.934540 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5627-account-create-update-mbnwf" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.939563 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.989926 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5627-account-create-update-mbnwf"] Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.016513 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.556144986 podStartE2EDuration="7.016489137s" podCreationTimestamp="2026-01-21 11:21:47 +0000 UTC" firstStartedPulling="2026-01-21 11:21:48.80186767 +0000 UTC m=+1496.061824129" lastFinishedPulling="2026-01-21 11:21:52.262211811 +0000 UTC m=+1499.522168280" observedRunningTime="2026-01-21 11:21:53.723420122 +0000 UTC m=+1500.983376591" watchObservedRunningTime="2026-01-21 11:21:54.016489137 +0000 UTC m=+1501.276445606" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.040329 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dmdm\" (UniqueName: \"kubernetes.io/projected/de50b4a3-643f-4e4a-9853-b794eae5c08c-kube-api-access-4dmdm\") pod \"nova-cell0-5627-account-create-update-mbnwf\" (UID: \"de50b4a3-643f-4e4a-9853-b794eae5c08c\") " pod="openstack/nova-cell0-5627-account-create-update-mbnwf" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.040553 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de50b4a3-643f-4e4a-9853-b794eae5c08c-operator-scripts\") pod \"nova-cell0-5627-account-create-update-mbnwf\" (UID: \"de50b4a3-643f-4e4a-9853-b794eae5c08c\") " pod="openstack/nova-cell0-5627-account-create-update-mbnwf" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.055091 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-b4dc-account-create-update-46bk2"] Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.059246 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-b4dc-account-create-update-46bk2" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.064562 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-b4dc-account-create-update-46bk2"] Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.064943 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.083807 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-f99bl" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.144279 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de50b4a3-643f-4e4a-9853-b794eae5c08c-operator-scripts\") pod \"nova-cell0-5627-account-create-update-mbnwf\" (UID: \"de50b4a3-643f-4e4a-9853-b794eae5c08c\") " pod="openstack/nova-cell0-5627-account-create-update-mbnwf" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.144654 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d8a04fd-1a86-454f-bd69-64ad270b8357-operator-scripts\") pod \"nova-cell1-b4dc-account-create-update-46bk2\" (UID: \"4d8a04fd-1a86-454f-bd69-64ad270b8357\") " pod="openstack/nova-cell1-b4dc-account-create-update-46bk2" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.144712 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzfnm\" (UniqueName: \"kubernetes.io/projected/4d8a04fd-1a86-454f-bd69-64ad270b8357-kube-api-access-qzfnm\") pod \"nova-cell1-b4dc-account-create-update-46bk2\" (UID: \"4d8a04fd-1a86-454f-bd69-64ad270b8357\") " pod="openstack/nova-cell1-b4dc-account-create-update-46bk2" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.146244 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dmdm\" (UniqueName: \"kubernetes.io/projected/de50b4a3-643f-4e4a-9853-b794eae5c08c-kube-api-access-4dmdm\") pod \"nova-cell0-5627-account-create-update-mbnwf\" (UID: \"de50b4a3-643f-4e4a-9853-b794eae5c08c\") " pod="openstack/nova-cell0-5627-account-create-update-mbnwf" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.147336 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de50b4a3-643f-4e4a-9853-b794eae5c08c-operator-scripts\") pod \"nova-cell0-5627-account-create-update-mbnwf\" (UID: \"de50b4a3-643f-4e4a-9853-b794eae5c08c\") " pod="openstack/nova-cell0-5627-account-create-update-mbnwf" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.188442 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dmdm\" (UniqueName: \"kubernetes.io/projected/de50b4a3-643f-4e4a-9853-b794eae5c08c-kube-api-access-4dmdm\") pod \"nova-cell0-5627-account-create-update-mbnwf\" (UID: \"de50b4a3-643f-4e4a-9853-b794eae5c08c\") " pod="openstack/nova-cell0-5627-account-create-update-mbnwf" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.248191 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d8a04fd-1a86-454f-bd69-64ad270b8357-operator-scripts\") pod \"nova-cell1-b4dc-account-create-update-46bk2\" (UID: 
\"4d8a04fd-1a86-454f-bd69-64ad270b8357\") " pod="openstack/nova-cell1-b4dc-account-create-update-46bk2" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.248251 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzfnm\" (UniqueName: \"kubernetes.io/projected/4d8a04fd-1a86-454f-bd69-64ad270b8357-kube-api-access-qzfnm\") pod \"nova-cell1-b4dc-account-create-update-46bk2\" (UID: \"4d8a04fd-1a86-454f-bd69-64ad270b8357\") " pod="openstack/nova-cell1-b4dc-account-create-update-46bk2" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.249031 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d8a04fd-1a86-454f-bd69-64ad270b8357-operator-scripts\") pod \"nova-cell1-b4dc-account-create-update-46bk2\" (UID: \"4d8a04fd-1a86-454f-bd69-64ad270b8357\") " pod="openstack/nova-cell1-b4dc-account-create-update-46bk2" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.277284 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzfnm\" (UniqueName: \"kubernetes.io/projected/4d8a04fd-1a86-454f-bd69-64ad270b8357-kube-api-access-qzfnm\") pod \"nova-cell1-b4dc-account-create-update-46bk2\" (UID: \"4d8a04fd-1a86-454f-bd69-64ad270b8357\") " pod="openstack/nova-cell1-b4dc-account-create-update-46bk2" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.293177 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5627-account-create-update-mbnwf" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.468883 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-b4dc-account-create-update-46bk2" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.502476 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.661434 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-b85xv"] Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.809868 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-jdk2x"] Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.825946 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-fb46-account-create-update-xxwmq"] Jan 21 11:21:54 crc kubenswrapper[4881]: W0121 11:21:54.830029 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod502efce3_0d16_491d_b6fa_1b1d98f76d4b.slice/crio-35f860e151295e5ea65fab1c5b7e59d1d8a5061680486380408ebd5dc537484b WatchSource:0}: Error finding container 35f860e151295e5ea65fab1c5b7e59d1d8a5061680486380408ebd5dc537484b: Status 404 returned error can't find the container with id 35f860e151295e5ea65fab1c5b7e59d1d8a5061680486380408ebd5dc537484b Jan 21 11:21:54 crc kubenswrapper[4881]: W0121 11:21:54.843697 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29487dae_24e9_4d5b_9819_99516df78630.slice/crio-6cb58542cb5769c92ce7a580725af8d619f54b42ee691161a9bc1aa7508fcb9c WatchSource:0}: Error finding container 6cb58542cb5769c92ce7a580725af8d619f54b42ee691161a9bc1aa7508fcb9c: Status 404 returned error can't find the container with id 6cb58542cb5769c92ce7a580725af8d619f54b42ee691161a9bc1aa7508fcb9c Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 
11:21:54.853066 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.147343 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-f99bl"] Jan 21 11:21:55 crc kubenswrapper[4881]: W0121 11:21:55.209664 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde50b4a3_643f_4e4a_9853_b794eae5c08c.slice/crio-f365bdc014f876728f82cd5bd3495274a14cd4e992642927c9b972bc8d3b5964 WatchSource:0}: Error finding container f365bdc014f876728f82cd5bd3495274a14cd4e992642927c9b972bc8d3b5964: Status 404 returned error can't find the container with id f365bdc014f876728f82cd5bd3495274a14cd4e992642927c9b972bc8d3b5964 Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.230991 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5627-account-create-update-mbnwf"] Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.272664 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-b4dc-account-create-update-46bk2"] Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.716046 4881 generic.go:334] "Generic (PLEG): container finished" podID="502efce3-0d16-491d-b6fa-1b1d98f76d4b" containerID="3e8735972d4959fbfdcc07dada19674d2a9110125d71fdfe160979bcc5be0481" exitCode=0 Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.716162 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jdk2x" event={"ID":"502efce3-0d16-491d-b6fa-1b1d98f76d4b","Type":"ContainerDied","Data":"3e8735972d4959fbfdcc07dada19674d2a9110125d71fdfe160979bcc5be0481"} Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.716198 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jdk2x" event={"ID":"502efce3-0d16-491d-b6fa-1b1d98f76d4b","Type":"ContainerStarted","Data":"35f860e151295e5ea65fab1c5b7e59d1d8a5061680486380408ebd5dc537484b"} Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.718795 4881 generic.go:334] "Generic (PLEG): container finished" podID="2a601b0e-b326-4e55-901e-08a32fe24005" containerID="5d3f34869256c4d21e6b17d94ceaa6baf87aefe4c608982c7e1561bfc3b81de2" exitCode=0 Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.718927 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-b85xv" event={"ID":"2a601b0e-b326-4e55-901e-08a32fe24005","Type":"ContainerDied","Data":"5d3f34869256c4d21e6b17d94ceaa6baf87aefe4c608982c7e1561bfc3b81de2"} Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.718949 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-b85xv" event={"ID":"2a601b0e-b326-4e55-901e-08a32fe24005","Type":"ContainerStarted","Data":"a7ef229f2fb104b9e8cc424559b0f8a908033c5487165445292865d3e0cdb0fb"} Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.722270 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-fb46-account-create-update-xxwmq" event={"ID":"29487dae-24e9-4d5b-9819-99516df78630","Type":"ContainerStarted","Data":"dccd9ebbabd2787629df88e189e045b4233f9efdaa17a33f088ad8c951d3530a"} Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.722329 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-fb46-account-create-update-xxwmq" 
event={"ID":"29487dae-24e9-4d5b-9819-99516df78630","Type":"ContainerStarted","Data":"6cb58542cb5769c92ce7a580725af8d619f54b42ee691161a9bc1aa7508fcb9c"} Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.727881 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b4dc-account-create-update-46bk2" event={"ID":"4d8a04fd-1a86-454f-bd69-64ad270b8357","Type":"ContainerStarted","Data":"27659f5aab69bf4af66ab9aeb1d61a07fd49c77e8daa35d08cb33096b28e9074"} Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.727942 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b4dc-account-create-update-46bk2" event={"ID":"4d8a04fd-1a86-454f-bd69-64ad270b8357","Type":"ContainerStarted","Data":"5af4b877aa6f4206f95841c9ad3225a13be2d82d1149e72ace1f40c99f028477"} Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.731772 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-f99bl" event={"ID":"f2c35a47-0e6e-4760-9026-617ca187b066","Type":"ContainerStarted","Data":"e072378bb8b79adf91d2701f6ed4a0743a1956ccf92868309d50c74d1a40ff46"} Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.731847 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-f99bl" event={"ID":"f2c35a47-0e6e-4760-9026-617ca187b066","Type":"ContainerStarted","Data":"c609580b7b4676d9f33d5da30b233c4958836e02e51a3088b77cdd78db145b29"} Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.741492 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerName="ceilometer-central-agent" containerID="cri-o://5ef74248d816cbba0967845a616d8ff93c71875da1f2537b3583d30494d188a0" gracePeriod=30 Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.742884 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5627-account-create-update-mbnwf" event={"ID":"de50b4a3-643f-4e4a-9853-b794eae5c08c","Type":"ContainerStarted","Data":"22038197b765a72901f7e4d04d0bebb17e8d3bca09464adc6dc75e99375c24ab"} Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.742918 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5627-account-create-update-mbnwf" event={"ID":"de50b4a3-643f-4e4a-9853-b794eae5c08c","Type":"ContainerStarted","Data":"f365bdc014f876728f82cd5bd3495274a14cd4e992642927c9b972bc8d3b5964"} Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.742994 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerName="proxy-httpd" containerID="cri-o://786551fea0a0b08ed4797eaa4ac0bd544644fed6b4135ad7593d1cf541bbe884" gracePeriod=30 Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.743058 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerName="sg-core" containerID="cri-o://0967e57a0feff48d2185c1e282e0585b131cee338ade45ea85673a62193b1f57" gracePeriod=30 Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.743115 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerName="ceilometer-notification-agent" containerID="cri-o://53f83f934fef330d755d320c983315d32feeaac6da62dbb78c115b45e16f216a" gracePeriod=30 Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 
11:21:55.890924 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-f99bl" podStartSLOduration=2.890902713 podStartE2EDuration="2.890902713s" podCreationTimestamp="2026-01-21 11:21:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:55.825639648 +0000 UTC m=+1503.085596117" watchObservedRunningTime="2026-01-21 11:21:55.890902713 +0000 UTC m=+1503.150859182" Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.908106 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-b4dc-account-create-update-46bk2" podStartSLOduration=2.9080798 podStartE2EDuration="2.9080798s" podCreationTimestamp="2026-01-21 11:21:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:55.85862998 +0000 UTC m=+1503.118586459" watchObservedRunningTime="2026-01-21 11:21:55.9080798 +0000 UTC m=+1503.168036269" Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.909916 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-fb46-account-create-update-xxwmq" podStartSLOduration=2.909905777 podStartE2EDuration="2.909905777s" podCreationTimestamp="2026-01-21 11:21:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:55.898714268 +0000 UTC m=+1503.158670737" watchObservedRunningTime="2026-01-21 11:21:55.909905777 +0000 UTC m=+1503.169862246" Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.939638 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-5627-account-create-update-mbnwf" podStartSLOduration=2.939610916 podStartE2EDuration="2.939610916s" podCreationTimestamp="2026-01-21 11:21:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:55.929304929 +0000 UTC m=+1503.189261398" watchObservedRunningTime="2026-01-21 11:21:55.939610916 +0000 UTC m=+1503.199567385" Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.766688 4881 generic.go:334] "Generic (PLEG): container finished" podID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerID="786551fea0a0b08ed4797eaa4ac0bd544644fed6b4135ad7593d1cf541bbe884" exitCode=0 Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.767753 4881 generic.go:334] "Generic (PLEG): container finished" podID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerID="0967e57a0feff48d2185c1e282e0585b131cee338ade45ea85673a62193b1f57" exitCode=2 Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.767895 4881 generic.go:334] "Generic (PLEG): container finished" podID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerID="53f83f934fef330d755d320c983315d32feeaac6da62dbb78c115b45e16f216a" exitCode=0 Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.767924 4881 generic.go:334] "Generic (PLEG): container finished" podID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerID="5ef74248d816cbba0967845a616d8ff93c71875da1f2537b3583d30494d188a0" exitCode=0 Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.766911 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d84ba548-9d82-44b7-bae5-bf8cf84ecc79","Type":"ContainerDied","Data":"786551fea0a0b08ed4797eaa4ac0bd544644fed6b4135ad7593d1cf541bbe884"} Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.768153 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d84ba548-9d82-44b7-bae5-bf8cf84ecc79","Type":"ContainerDied","Data":"0967e57a0feff48d2185c1e282e0585b131cee338ade45ea85673a62193b1f57"} Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.768185 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d84ba548-9d82-44b7-bae5-bf8cf84ecc79","Type":"ContainerDied","Data":"53f83f934fef330d755d320c983315d32feeaac6da62dbb78c115b45e16f216a"} Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.768197 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d84ba548-9d82-44b7-bae5-bf8cf84ecc79","Type":"ContainerDied","Data":"5ef74248d816cbba0967845a616d8ff93c71875da1f2537b3583d30494d188a0"} Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.777103 4881 generic.go:334] "Generic (PLEG): container finished" podID="29487dae-24e9-4d5b-9819-99516df78630" containerID="dccd9ebbabd2787629df88e189e045b4233f9efdaa17a33f088ad8c951d3530a" exitCode=0 Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.777171 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-fb46-account-create-update-xxwmq" event={"ID":"29487dae-24e9-4d5b-9819-99516df78630","Type":"ContainerDied","Data":"dccd9ebbabd2787629df88e189e045b4233f9efdaa17a33f088ad8c951d3530a"} Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.784314 4881 generic.go:334] "Generic (PLEG): container finished" podID="4d8a04fd-1a86-454f-bd69-64ad270b8357" containerID="27659f5aab69bf4af66ab9aeb1d61a07fd49c77e8daa35d08cb33096b28e9074" exitCode=0 Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.784482 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b4dc-account-create-update-46bk2" event={"ID":"4d8a04fd-1a86-454f-bd69-64ad270b8357","Type":"ContainerDied","Data":"27659f5aab69bf4af66ab9aeb1d61a07fd49c77e8daa35d08cb33096b28e9074"} Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.788408 4881 generic.go:334] "Generic (PLEG): container finished" podID="f2c35a47-0e6e-4760-9026-617ca187b066" containerID="e072378bb8b79adf91d2701f6ed4a0743a1956ccf92868309d50c74d1a40ff46" exitCode=0 Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.788634 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-f99bl" event={"ID":"f2c35a47-0e6e-4760-9026-617ca187b066","Type":"ContainerDied","Data":"e072378bb8b79adf91d2701f6ed4a0743a1956ccf92868309d50c74d1a40ff46"} Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.790767 4881 generic.go:334] "Generic (PLEG): container finished" podID="de50b4a3-643f-4e4a-9853-b794eae5c08c" containerID="22038197b765a72901f7e4d04d0bebb17e8d3bca09464adc6dc75e99375c24ab" exitCode=0 Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.791099 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5627-account-create-update-mbnwf" event={"ID":"de50b4a3-643f-4e4a-9853-b794eae5c08c","Type":"ContainerDied","Data":"22038197b765a72901f7e4d04d0bebb17e8d3bca09464adc6dc75e99375c24ab"} Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.160303 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.253630 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-combined-ca-bundle\") pod \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.253743 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-sg-core-conf-yaml\") pod \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.253871 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-log-httpd\") pod \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.253925 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-scripts\") pod \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.254010 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-config-data\") pod \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.254124 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-run-httpd\") pod \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.254178 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr9jz\" (UniqueName: \"kubernetes.io/projected/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-kube-api-access-hr9jz\") pod \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.259840 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d84ba548-9d82-44b7-bae5-bf8cf84ecc79" (UID: "d84ba548-9d82-44b7-bae5-bf8cf84ecc79"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.269134 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d84ba548-9d82-44b7-bae5-bf8cf84ecc79" (UID: "d84ba548-9d82-44b7-bae5-bf8cf84ecc79"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.275168 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-kube-api-access-hr9jz" (OuterVolumeSpecName: "kube-api-access-hr9jz") pod "d84ba548-9d82-44b7-bae5-bf8cf84ecc79" (UID: "d84ba548-9d82-44b7-bae5-bf8cf84ecc79"). InnerVolumeSpecName "kube-api-access-hr9jz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.287072 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-scripts" (OuterVolumeSpecName: "scripts") pod "d84ba548-9d82-44b7-bae5-bf8cf84ecc79" (UID: "d84ba548-9d82-44b7-bae5-bf8cf84ecc79"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.363137 4881 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.363170 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr9jz\" (UniqueName: \"kubernetes.io/projected/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-kube-api-access-hr9jz\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.363196 4881 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.363205 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.396275 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d84ba548-9d82-44b7-bae5-bf8cf84ecc79" (UID: "d84ba548-9d82-44b7-bae5-bf8cf84ecc79"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.600897 4881 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.674691 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d84ba548-9d82-44b7-bae5-bf8cf84ecc79" (UID: "d84ba548-9d82-44b7-bae5-bf8cf84ecc79"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.704087 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.730485 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-config-data" (OuterVolumeSpecName: "config-data") pod "d84ba548-9d82-44b7-bae5-bf8cf84ecc79" (UID: "d84ba548-9d82-44b7-bae5-bf8cf84ecc79"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.845178 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.845377 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d84ba548-9d82-44b7-bae5-bf8cf84ecc79","Type":"ContainerDied","Data":"95c906c4b339a07e39ec45c37bd23642eb30462373347c321f4ca0cc4f7e8653"} Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.845452 4881 scope.go:117] "RemoveContainer" containerID="786551fea0a0b08ed4797eaa4ac0bd544644fed6b4135ad7593d1cf541bbe884" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.893262 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.956428 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-b85xv" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.967630 4881 scope.go:117] "RemoveContainer" containerID="0967e57a0feff48d2185c1e282e0585b131cee338ade45ea85673a62193b1f57" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.972859 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-jdk2x" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.982117 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.049963 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.076835 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:58 crc kubenswrapper[4881]: E0121 11:21:58.077335 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="502efce3-0d16-491d-b6fa-1b1d98f76d4b" containerName="mariadb-database-create" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.077355 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="502efce3-0d16-491d-b6fa-1b1d98f76d4b" containerName="mariadb-database-create" Jan 21 11:21:58 crc kubenswrapper[4881]: E0121 11:21:58.077376 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a601b0e-b326-4e55-901e-08a32fe24005" containerName="mariadb-database-create" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.077383 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a601b0e-b326-4e55-901e-08a32fe24005" containerName="mariadb-database-create" Jan 21 11:21:58 crc kubenswrapper[4881]: E0121 11:21:58.077428 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerName="sg-core" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.077437 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerName="sg-core" Jan 21 11:21:58 crc kubenswrapper[4881]: E0121 11:21:58.077447 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerName="ceilometer-notification-agent" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.077453 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerName="ceilometer-notification-agent" Jan 21 11:21:58 crc kubenswrapper[4881]: E0121 11:21:58.077463 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerName="ceilometer-central-agent" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.077469 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerName="ceilometer-central-agent" Jan 21 11:21:58 crc kubenswrapper[4881]: E0121 11:21:58.077488 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerName="proxy-httpd" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.077494 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerName="proxy-httpd" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.077663 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="502efce3-0d16-491d-b6fa-1b1d98f76d4b" containerName="mariadb-database-create" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.077674 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerName="ceilometer-central-agent" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.077689 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" 
containerName="sg-core" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.077698 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerName="proxy-httpd" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.077705 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a601b0e-b326-4e55-901e-08a32fe24005" containerName="mariadb-database-create" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.077714 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerName="ceilometer-notification-agent" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.079522 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.082623 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.082901 4881 scope.go:117] "RemoveContainer" containerID="53f83f934fef330d755d320c983315d32feeaac6da62dbb78c115b45e16f216a" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.083094 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.096755 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5x9h2\" (UniqueName: \"kubernetes.io/projected/502efce3-0d16-491d-b6fa-1b1d98f76d4b-kube-api-access-5x9h2\") pod \"502efce3-0d16-491d-b6fa-1b1d98f76d4b\" (UID: \"502efce3-0d16-491d-b6fa-1b1d98f76d4b\") " Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.096883 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4g8z\" (UniqueName: \"kubernetes.io/projected/2a601b0e-b326-4e55-901e-08a32fe24005-kube-api-access-s4g8z\") pod \"2a601b0e-b326-4e55-901e-08a32fe24005\" (UID: \"2a601b0e-b326-4e55-901e-08a32fe24005\") " Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.097056 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a601b0e-b326-4e55-901e-08a32fe24005-operator-scripts\") pod \"2a601b0e-b326-4e55-901e-08a32fe24005\" (UID: \"2a601b0e-b326-4e55-901e-08a32fe24005\") " Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.097180 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/502efce3-0d16-491d-b6fa-1b1d98f76d4b-operator-scripts\") pod \"502efce3-0d16-491d-b6fa-1b1d98f76d4b\" (UID: \"502efce3-0d16-491d-b6fa-1b1d98f76d4b\") " Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.098612 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a601b0e-b326-4e55-901e-08a32fe24005-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2a601b0e-b326-4e55-901e-08a32fe24005" (UID: "2a601b0e-b326-4e55-901e-08a32fe24005"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.098721 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/502efce3-0d16-491d-b6fa-1b1d98f76d4b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "502efce3-0d16-491d-b6fa-1b1d98f76d4b" (UID: "502efce3-0d16-491d-b6fa-1b1d98f76d4b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.101254 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.104418 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a601b0e-b326-4e55-901e-08a32fe24005-kube-api-access-s4g8z" (OuterVolumeSpecName: "kube-api-access-s4g8z") pod "2a601b0e-b326-4e55-901e-08a32fe24005" (UID: "2a601b0e-b326-4e55-901e-08a32fe24005"). InnerVolumeSpecName "kube-api-access-s4g8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.114232 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/502efce3-0d16-491d-b6fa-1b1d98f76d4b-kube-api-access-5x9h2" (OuterVolumeSpecName: "kube-api-access-5x9h2") pod "502efce3-0d16-491d-b6fa-1b1d98f76d4b" (UID: "502efce3-0d16-491d-b6fa-1b1d98f76d4b"). InnerVolumeSpecName "kube-api-access-5x9h2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.144859 4881 scope.go:117] "RemoveContainer" containerID="5ef74248d816cbba0967845a616d8ff93c71875da1f2537b3583d30494d188a0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.159219 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.159262 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.183698 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.183874 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.200864 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28ca8213-9b24-4785-9570-d2973570fbdc-log-httpd\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.200940 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.201015 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-scripts\") pod \"ceilometer-0\" (UID: 
\"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.201045 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-config-data\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.201091 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzvfr\" (UniqueName: \"kubernetes.io/projected/28ca8213-9b24-4785-9570-d2973570fbdc-kube-api-access-gzvfr\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.201115 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.201130 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28ca8213-9b24-4785-9570-d2973570fbdc-run-httpd\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.201226 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a601b0e-b326-4e55-901e-08a32fe24005-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.201240 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/502efce3-0d16-491d-b6fa-1b1d98f76d4b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.201252 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5x9h2\" (UniqueName: \"kubernetes.io/projected/502efce3-0d16-491d-b6fa-1b1d98f76d4b-kube-api-access-5x9h2\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.201264 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4g8z\" (UniqueName: \"kubernetes.io/projected/2a601b0e-b326-4e55-901e-08a32fe24005-kube-api-access-s4g8z\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.212567 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.230259 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.250621 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.263313 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.277324 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-b4dc-account-create-update-46bk2" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.305279 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28ca8213-9b24-4785-9570-d2973570fbdc-log-httpd\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.305407 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.305561 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-scripts\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.305645 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-config-data\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.305835 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzvfr\" (UniqueName: \"kubernetes.io/projected/28ca8213-9b24-4785-9570-d2973570fbdc-kube-api-access-gzvfr\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.305899 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.305918 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28ca8213-9b24-4785-9570-d2973570fbdc-run-httpd\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.306474 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28ca8213-9b24-4785-9570-d2973570fbdc-run-httpd\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.312273 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28ca8213-9b24-4785-9570-d2973570fbdc-log-httpd\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.313807 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-config-data\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " 
pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.320419 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-scripts\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.333643 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.338666 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzvfr\" (UniqueName: \"kubernetes.io/projected/28ca8213-9b24-4785-9570-d2973570fbdc-kube-api-access-gzvfr\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.338960 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.409662 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzfnm\" (UniqueName: \"kubernetes.io/projected/4d8a04fd-1a86-454f-bd69-64ad270b8357-kube-api-access-qzfnm\") pod \"4d8a04fd-1a86-454f-bd69-64ad270b8357\" (UID: \"4d8a04fd-1a86-454f-bd69-64ad270b8357\") " Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.409740 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d8a04fd-1a86-454f-bd69-64ad270b8357-operator-scripts\") pod \"4d8a04fd-1a86-454f-bd69-64ad270b8357\" (UID: \"4d8a04fd-1a86-454f-bd69-64ad270b8357\") " Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.411684 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d8a04fd-1a86-454f-bd69-64ad270b8357-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4d8a04fd-1a86-454f-bd69-64ad270b8357" (UID: "4d8a04fd-1a86-454f-bd69-64ad270b8357"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.413582 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d8a04fd-1a86-454f-bd69-64ad270b8357-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.415227 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d8a04fd-1a86-454f-bd69-64ad270b8357-kube-api-access-qzfnm" (OuterVolumeSpecName: "kube-api-access-qzfnm") pod "4d8a04fd-1a86-454f-bd69-64ad270b8357" (UID: "4d8a04fd-1a86-454f-bd69-64ad270b8357"). InnerVolumeSpecName "kube-api-access-qzfnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.415445 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.515531 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzfnm\" (UniqueName: \"kubernetes.io/projected/4d8a04fd-1a86-454f-bd69-64ad270b8357-kube-api-access-qzfnm\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.888313 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jdk2x" event={"ID":"502efce3-0d16-491d-b6fa-1b1d98f76d4b","Type":"ContainerDied","Data":"35f860e151295e5ea65fab1c5b7e59d1d8a5061680486380408ebd5dc537484b"} Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.888657 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35f860e151295e5ea65fab1c5b7e59d1d8a5061680486380408ebd5dc537484b" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.888744 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jdk2x" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.894097 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-b85xv" event={"ID":"2a601b0e-b326-4e55-901e-08a32fe24005","Type":"ContainerDied","Data":"a7ef229f2fb104b9e8cc424559b0f8a908033c5487165445292865d3e0cdb0fb"} Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.894141 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7ef229f2fb104b9e8cc424559b0f8a908033c5487165445292865d3e0cdb0fb" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.894165 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-b85xv" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.899736 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-fb46-account-create-update-xxwmq" event={"ID":"29487dae-24e9-4d5b-9819-99516df78630","Type":"ContainerDied","Data":"6cb58542cb5769c92ce7a580725af8d619f54b42ee691161a9bc1aa7508fcb9c"} Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.899798 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cb58542cb5769c92ce7a580725af8d619f54b42ee691161a9bc1aa7508fcb9c" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.904230 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b4dc-account-create-update-46bk2" event={"ID":"4d8a04fd-1a86-454f-bd69-64ad270b8357","Type":"ContainerDied","Data":"5af4b877aa6f4206f95841c9ad3225a13be2d82d1149e72ace1f40c99f028477"} Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.904284 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5af4b877aa6f4206f95841c9ad3225a13be2d82d1149e72ace1f40c99f028477" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.904363 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-b4dc-account-create-update-46bk2" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.911466 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-f99bl" event={"ID":"f2c35a47-0e6e-4760-9026-617ca187b066","Type":"ContainerDied","Data":"c609580b7b4676d9f33d5da30b233c4958836e02e51a3088b77cdd78db145b29"} Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.911516 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c609580b7b4676d9f33d5da30b233c4958836e02e51a3088b77cdd78db145b29" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.917210 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5627-account-create-update-mbnwf" event={"ID":"de50b4a3-643f-4e4a-9853-b794eae5c08c","Type":"ContainerDied","Data":"f365bdc014f876728f82cd5bd3495274a14cd4e992642927c9b972bc8d3b5964"} Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.917282 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f365bdc014f876728f82cd5bd3495274a14cd4e992642927c9b972bc8d3b5964" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.917316 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.917333 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.917521 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.917573 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.973385 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-fb46-account-create-update-xxwmq" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.992543 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5627-account-create-update-mbnwf" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.005155 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-f99bl" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.040249 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dmdm\" (UniqueName: \"kubernetes.io/projected/de50b4a3-643f-4e4a-9853-b794eae5c08c-kube-api-access-4dmdm\") pod \"de50b4a3-643f-4e4a-9853-b794eae5c08c\" (UID: \"de50b4a3-643f-4e4a-9853-b794eae5c08c\") " Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.040675 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29487dae-24e9-4d5b-9819-99516df78630-operator-scripts\") pod \"29487dae-24e9-4d5b-9819-99516df78630\" (UID: \"29487dae-24e9-4d5b-9819-99516df78630\") " Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.040833 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de50b4a3-643f-4e4a-9853-b794eae5c08c-operator-scripts\") pod \"de50b4a3-643f-4e4a-9853-b794eae5c08c\" (UID: \"de50b4a3-643f-4e4a-9853-b794eae5c08c\") " Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.041108 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f9n8\" (UniqueName: \"kubernetes.io/projected/29487dae-24e9-4d5b-9819-99516df78630-kube-api-access-6f9n8\") pod \"29487dae-24e9-4d5b-9819-99516df78630\" (UID: \"29487dae-24e9-4d5b-9819-99516df78630\") " Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.042344 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29487dae-24e9-4d5b-9819-99516df78630-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "29487dae-24e9-4d5b-9819-99516df78630" (UID: "29487dae-24e9-4d5b-9819-99516df78630"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.045568 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de50b4a3-643f-4e4a-9853-b794eae5c08c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "de50b4a3-643f-4e4a-9853-b794eae5c08c" (UID: "de50b4a3-643f-4e4a-9853-b794eae5c08c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.051270 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de50b4a3-643f-4e4a-9853-b794eae5c08c-kube-api-access-4dmdm" (OuterVolumeSpecName: "kube-api-access-4dmdm") pod "de50b4a3-643f-4e4a-9853-b794eae5c08c" (UID: "de50b4a3-643f-4e4a-9853-b794eae5c08c"). InnerVolumeSpecName "kube-api-access-4dmdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.053990 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29487dae-24e9-4d5b-9819-99516df78630-kube-api-access-6f9n8" (OuterVolumeSpecName: "kube-api-access-6f9n8") pod "29487dae-24e9-4d5b-9819-99516df78630" (UID: "29487dae-24e9-4d5b-9819-99516df78630"). InnerVolumeSpecName "kube-api-access-6f9n8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.130620 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.147151 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhnvm\" (UniqueName: \"kubernetes.io/projected/f2c35a47-0e6e-4760-9026-617ca187b066-kube-api-access-lhnvm\") pod \"f2c35a47-0e6e-4760-9026-617ca187b066\" (UID: \"f2c35a47-0e6e-4760-9026-617ca187b066\") " Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.147298 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2c35a47-0e6e-4760-9026-617ca187b066-operator-scripts\") pod \"f2c35a47-0e6e-4760-9026-617ca187b066\" (UID: \"f2c35a47-0e6e-4760-9026-617ca187b066\") " Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.148091 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dmdm\" (UniqueName: \"kubernetes.io/projected/de50b4a3-643f-4e4a-9853-b794eae5c08c-kube-api-access-4dmdm\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.148124 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29487dae-24e9-4d5b-9819-99516df78630-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.148138 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de50b4a3-643f-4e4a-9853-b794eae5c08c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.148149 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f9n8\" (UniqueName: \"kubernetes.io/projected/29487dae-24e9-4d5b-9819-99516df78630-kube-api-access-6f9n8\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.148648 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2c35a47-0e6e-4760-9026-617ca187b066-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f2c35a47-0e6e-4760-9026-617ca187b066" (UID: "f2c35a47-0e6e-4760-9026-617ca187b066"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:59 crc kubenswrapper[4881]: W0121 11:21:59.148964 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28ca8213_9b24_4785_9570_d2973570fbdc.slice/crio-84387e9f1eda4be1e2e13f245e7866daad306dd7bc81eda92adfe5267e83ba52 WatchSource:0}: Error finding container 84387e9f1eda4be1e2e13f245e7866daad306dd7bc81eda92adfe5267e83ba52: Status 404 returned error can't find the container with id 84387e9f1eda4be1e2e13f245e7866daad306dd7bc81eda92adfe5267e83ba52 Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.157372 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2c35a47-0e6e-4760-9026-617ca187b066-kube-api-access-lhnvm" (OuterVolumeSpecName: "kube-api-access-lhnvm") pod "f2c35a47-0e6e-4760-9026-617ca187b066" (UID: "f2c35a47-0e6e-4760-9026-617ca187b066"). InnerVolumeSpecName "kube-api-access-lhnvm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.251270 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhnvm\" (UniqueName: \"kubernetes.io/projected/f2c35a47-0e6e-4760-9026-617ca187b066-kube-api-access-lhnvm\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.251650 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2c35a47-0e6e-4760-9026-617ca187b066-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.325608 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" path="/var/lib/kubelet/pods/d84ba548-9d82-44b7-bae5-bf8cf84ecc79/volumes" Jan 21 11:21:59 crc kubenswrapper[4881]: E0121 11:21:59.501458 4881 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4ba0181030ceb68e7fdb5249d09391d40feea2fca13e45d6b4d9c7f3ba56c71d" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Jan 21 11:21:59 crc kubenswrapper[4881]: E0121 11:21:59.503545 4881 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4ba0181030ceb68e7fdb5249d09391d40feea2fca13e45d6b4d9c7f3ba56c71d" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Jan 21 11:21:59 crc kubenswrapper[4881]: E0121 11:21:59.504974 4881 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4ba0181030ceb68e7fdb5249d09391d40feea2fca13e45d6b4d9c7f3ba56c71d" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Jan 21 11:21:59 crc kubenswrapper[4881]: E0121 11:21:59.505011 4881 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-decision-engine-0" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerName="watcher-decision-engine" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.933217 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5627-account-create-update-mbnwf" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.933247 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28ca8213-9b24-4785-9570-d2973570fbdc","Type":"ContainerStarted","Data":"84387e9f1eda4be1e2e13f245e7866daad306dd7bc81eda92adfe5267e83ba52"} Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.933301 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-f99bl" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.933404 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-fb46-account-create-update-xxwmq" Jan 21 11:22:00 crc kubenswrapper[4881]: I0121 11:22:00.945876 4881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 11:22:00 crc kubenswrapper[4881]: I0121 11:22:00.946231 4881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 11:22:01 crc kubenswrapper[4881]: I0121 11:22:01.001487 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:22:02 crc kubenswrapper[4881]: I0121 11:22:02.981259 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28ca8213-9b24-4785-9570-d2973570fbdc","Type":"ContainerStarted","Data":"461544715f4a3f154544e0f37c4e4bbc147310a0bd62815eae5302504de75f07"} Jan 21 11:22:02 crc kubenswrapper[4881]: I0121 11:22:02.981839 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28ca8213-9b24-4785-9570-d2973570fbdc","Type":"ContainerStarted","Data":"05eebaca7eead0950dd873a8603c6201a9b2dc1e384271cdb00b8530ee218101"} Jan 21 11:22:03 crc kubenswrapper[4881]: I0121 11:22:03.998682 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28ca8213-9b24-4785-9570-d2973570fbdc","Type":"ContainerStarted","Data":"bf7c6034e2c42d9e693656ae69979f8a5455f71ca251857c2ffd4e50430c4b59"} Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.166377 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-f7mmp"] Jan 21 11:22:04 crc kubenswrapper[4881]: E0121 11:22:04.166887 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de50b4a3-643f-4e4a-9853-b794eae5c08c" containerName="mariadb-account-create-update" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.166908 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="de50b4a3-643f-4e4a-9853-b794eae5c08c" containerName="mariadb-account-create-update" Jan 21 11:22:04 crc kubenswrapper[4881]: E0121 11:22:04.166927 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2c35a47-0e6e-4760-9026-617ca187b066" containerName="mariadb-database-create" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.166934 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2c35a47-0e6e-4760-9026-617ca187b066" containerName="mariadb-database-create" Jan 21 11:22:04 crc kubenswrapper[4881]: E0121 11:22:04.166949 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d8a04fd-1a86-454f-bd69-64ad270b8357" containerName="mariadb-account-create-update" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.166955 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d8a04fd-1a86-454f-bd69-64ad270b8357" containerName="mariadb-account-create-update" Jan 21 11:22:04 crc kubenswrapper[4881]: E0121 11:22:04.166966 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29487dae-24e9-4d5b-9819-99516df78630" containerName="mariadb-account-create-update" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.166972 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="29487dae-24e9-4d5b-9819-99516df78630" containerName="mariadb-account-create-update" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.167154 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="de50b4a3-643f-4e4a-9853-b794eae5c08c" containerName="mariadb-account-create-update" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.167170 4881 
memory_manager.go:354] "RemoveStaleState removing state" podUID="4d8a04fd-1a86-454f-bd69-64ad270b8357" containerName="mariadb-account-create-update" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.167191 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="29487dae-24e9-4d5b-9819-99516df78630" containerName="mariadb-account-create-update" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.167211 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2c35a47-0e6e-4760-9026-617ca187b066" containerName="mariadb-database-create" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.168424 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.174808 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.175006 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-fjj24" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.175223 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.198350 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-f7mmp"] Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.326110 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-f7mmp\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.326192 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfw75\" (UniqueName: \"kubernetes.io/projected/16c22e38-1b3d-44b8-9519-0769200d708b-kube-api-access-vfw75\") pod \"nova-cell0-conductor-db-sync-f7mmp\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.326266 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-scripts\") pod \"nova-cell0-conductor-db-sync-f7mmp\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.326304 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-config-data\") pod \"nova-cell0-conductor-db-sync-f7mmp\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.432296 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-scripts\") pod \"nova-cell0-conductor-db-sync-f7mmp\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:04 crc kubenswrapper[4881]: 
I0121 11:22:04.432410 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-config-data\") pod \"nova-cell0-conductor-db-sync-f7mmp\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.432578 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-f7mmp\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.432658 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfw75\" (UniqueName: \"kubernetes.io/projected/16c22e38-1b3d-44b8-9519-0769200d708b-kube-api-access-vfw75\") pod \"nova-cell0-conductor-db-sync-f7mmp\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.448035 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-scripts\") pod \"nova-cell0-conductor-db-sync-f7mmp\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.449457 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-f7mmp\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.459387 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-config-data\") pod \"nova-cell0-conductor-db-sync-f7mmp\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.463506 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfw75\" (UniqueName: \"kubernetes.io/projected/16c22e38-1b3d-44b8-9519-0769200d708b-kube-api-access-vfw75\") pod \"nova-cell0-conductor-db-sync-f7mmp\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.494088 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:05 crc kubenswrapper[4881]: I0121 11:22:05.399805 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-f7mmp"] Jan 21 11:22:05 crc kubenswrapper[4881]: W0121 11:22:05.430106 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16c22e38_1b3d_44b8_9519_0769200d708b.slice/crio-6a75d9ea9e41983b4baba3e71a4e5dcc957acdbd7dcf5242117832a4b32a615c WatchSource:0}: Error finding container 6a75d9ea9e41983b4baba3e71a4e5dcc957acdbd7dcf5242117832a4b32a615c: Status 404 returned error can't find the container with id 6a75d9ea9e41983b4baba3e71a4e5dcc957acdbd7dcf5242117832a4b32a615c Jan 21 11:22:06 crc kubenswrapper[4881]: I0121 11:22:06.502058 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-f7mmp" event={"ID":"16c22e38-1b3d-44b8-9519-0769200d708b","Type":"ContainerStarted","Data":"6a75d9ea9e41983b4baba3e71a4e5dcc957acdbd7dcf5242117832a4b32a615c"} Jan 21 11:22:06 crc kubenswrapper[4881]: I0121 11:22:06.979337 4881 trace.go:236] Trace[768352876]: "Calculate volume metrics of catalog-content for pod openshift-marketplace/community-operators-bn24k" (21-Jan-2026 11:22:05.606) (total time: 1372ms): Jan 21 11:22:06 crc kubenswrapper[4881]: Trace[768352876]: [1.372653766s] [1.372653766s] END Jan 21 11:22:07 crc kubenswrapper[4881]: I0121 11:22:07.720504 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28ca8213-9b24-4785-9570-d2973570fbdc","Type":"ContainerStarted","Data":"47cb2e9443fbe79dc10dfaee5ff0983a904efe0dfa8880c83f37fe646f71a44c"} Jan 21 11:22:07 crc kubenswrapper[4881]: I0121 11:22:07.720948 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="ceilometer-central-agent" containerID="cri-o://05eebaca7eead0950dd873a8603c6201a9b2dc1e384271cdb00b8530ee218101" gracePeriod=30 Jan 21 11:22:07 crc kubenswrapper[4881]: I0121 11:22:07.721229 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 11:22:07 crc kubenswrapper[4881]: I0121 11:22:07.721447 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="proxy-httpd" containerID="cri-o://47cb2e9443fbe79dc10dfaee5ff0983a904efe0dfa8880c83f37fe646f71a44c" gracePeriod=30 Jan 21 11:22:07 crc kubenswrapper[4881]: I0121 11:22:07.721536 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="sg-core" containerID="cri-o://bf7c6034e2c42d9e693656ae69979f8a5455f71ca251857c2ffd4e50430c4b59" gracePeriod=30 Jan 21 11:22:07 crc kubenswrapper[4881]: I0121 11:22:07.721553 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="ceilometer-notification-agent" containerID="cri-o://461544715f4a3f154544e0f37c4e4bbc147310a0bd62815eae5302504de75f07" gracePeriod=30 Jan 21 11:22:07 crc kubenswrapper[4881]: I0121 11:22:07.753332 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.022889993 podStartE2EDuration="10.75330795s" 
podCreationTimestamp="2026-01-21 11:21:57 +0000 UTC" firstStartedPulling="2026-01-21 11:21:59.162053775 +0000 UTC m=+1506.422010244" lastFinishedPulling="2026-01-21 11:22:04.892471732 +0000 UTC m=+1512.152428201" observedRunningTime="2026-01-21 11:22:07.751433714 +0000 UTC m=+1515.011390183" watchObservedRunningTime="2026-01-21 11:22:07.75330795 +0000 UTC m=+1515.013264419" Jan 21 11:22:08 crc kubenswrapper[4881]: I0121 11:22:08.156197 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 11:22:08 crc kubenswrapper[4881]: I0121 11:22:08.156502 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 11:22:08 crc kubenswrapper[4881]: I0121 11:22:08.156605 4881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 11:22:08 crc kubenswrapper[4881]: I0121 11:22:08.156676 4881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 11:22:08 crc kubenswrapper[4881]: I0121 11:22:08.159084 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 11:22:08 crc kubenswrapper[4881]: I0121 11:22:08.356094 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 11:22:08 crc kubenswrapper[4881]: I0121 11:22:08.737529 4881 generic.go:334] "Generic (PLEG): container finished" podID="28ca8213-9b24-4785-9570-d2973570fbdc" containerID="47cb2e9443fbe79dc10dfaee5ff0983a904efe0dfa8880c83f37fe646f71a44c" exitCode=0 Jan 21 11:22:08 crc kubenswrapper[4881]: I0121 11:22:08.737569 4881 generic.go:334] "Generic (PLEG): container finished" podID="28ca8213-9b24-4785-9570-d2973570fbdc" containerID="bf7c6034e2c42d9e693656ae69979f8a5455f71ca251857c2ffd4e50430c4b59" exitCode=2 Jan 21 11:22:08 crc kubenswrapper[4881]: I0121 11:22:08.737579 4881 generic.go:334] "Generic (PLEG): container finished" podID="28ca8213-9b24-4785-9570-d2973570fbdc" containerID="461544715f4a3f154544e0f37c4e4bbc147310a0bd62815eae5302504de75f07" exitCode=0 Jan 21 11:22:08 crc kubenswrapper[4881]: I0121 11:22:08.738937 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28ca8213-9b24-4785-9570-d2973570fbdc","Type":"ContainerDied","Data":"47cb2e9443fbe79dc10dfaee5ff0983a904efe0dfa8880c83f37fe646f71a44c"} Jan 21 11:22:08 crc kubenswrapper[4881]: I0121 11:22:08.739025 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28ca8213-9b24-4785-9570-d2973570fbdc","Type":"ContainerDied","Data":"bf7c6034e2c42d9e693656ae69979f8a5455f71ca251857c2ffd4e50430c4b59"} Jan 21 11:22:08 crc kubenswrapper[4881]: I0121 11:22:08.739046 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28ca8213-9b24-4785-9570-d2973570fbdc","Type":"ContainerDied","Data":"461544715f4a3f154544e0f37c4e4bbc147310a0bd62815eae5302504de75f07"} Jan 21 11:22:10 crc kubenswrapper[4881]: I0121 11:22:10.791937 4881 generic.go:334] "Generic (PLEG): container finished" podID="28ca8213-9b24-4785-9570-d2973570fbdc" containerID="05eebaca7eead0950dd873a8603c6201a9b2dc1e384271cdb00b8530ee218101" exitCode=0 Jan 21 11:22:10 crc kubenswrapper[4881]: I0121 11:22:10.792469 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"28ca8213-9b24-4785-9570-d2973570fbdc","Type":"ContainerDied","Data":"05eebaca7eead0950dd873a8603c6201a9b2dc1e384271cdb00b8530ee218101"} Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.078574 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.165873 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-combined-ca-bundle\") pod \"28ca8213-9b24-4785-9570-d2973570fbdc\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.166028 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-config-data\") pod \"28ca8213-9b24-4785-9570-d2973570fbdc\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.166060 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28ca8213-9b24-4785-9570-d2973570fbdc-run-httpd\") pod \"28ca8213-9b24-4785-9570-d2973570fbdc\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.166129 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-scripts\") pod \"28ca8213-9b24-4785-9570-d2973570fbdc\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.166164 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzvfr\" (UniqueName: \"kubernetes.io/projected/28ca8213-9b24-4785-9570-d2973570fbdc-kube-api-access-gzvfr\") pod \"28ca8213-9b24-4785-9570-d2973570fbdc\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.166270 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28ca8213-9b24-4785-9570-d2973570fbdc-log-httpd\") pod \"28ca8213-9b24-4785-9570-d2973570fbdc\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.166319 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-sg-core-conf-yaml\") pod \"28ca8213-9b24-4785-9570-d2973570fbdc\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.168040 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28ca8213-9b24-4785-9570-d2973570fbdc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "28ca8213-9b24-4785-9570-d2973570fbdc" (UID: "28ca8213-9b24-4785-9570-d2973570fbdc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.177210 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28ca8213-9b24-4785-9570-d2973570fbdc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "28ca8213-9b24-4785-9570-d2973570fbdc" (UID: "28ca8213-9b24-4785-9570-d2973570fbdc"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.191047 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-scripts" (OuterVolumeSpecName: "scripts") pod "28ca8213-9b24-4785-9570-d2973570fbdc" (UID: "28ca8213-9b24-4785-9570-d2973570fbdc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.191288 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28ca8213-9b24-4785-9570-d2973570fbdc-kube-api-access-gzvfr" (OuterVolumeSpecName: "kube-api-access-gzvfr") pod "28ca8213-9b24-4785-9570-d2973570fbdc" (UID: "28ca8213-9b24-4785-9570-d2973570fbdc"). InnerVolumeSpecName "kube-api-access-gzvfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.265823 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "28ca8213-9b24-4785-9570-d2973570fbdc" (UID: "28ca8213-9b24-4785-9570-d2973570fbdc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.269570 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.269614 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzvfr\" (UniqueName: \"kubernetes.io/projected/28ca8213-9b24-4785-9570-d2973570fbdc-kube-api-access-gzvfr\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.269629 4881 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28ca8213-9b24-4785-9570-d2973570fbdc-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.269644 4881 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.269657 4881 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28ca8213-9b24-4785-9570-d2973570fbdc-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.308147 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28ca8213-9b24-4785-9570-d2973570fbdc" (UID: "28ca8213-9b24-4785-9570-d2973570fbdc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.364406 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-config-data" (OuterVolumeSpecName: "config-data") pod "28ca8213-9b24-4785-9570-d2973570fbdc" (UID: "28ca8213-9b24-4785-9570-d2973570fbdc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.372278 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.372312 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.808530 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28ca8213-9b24-4785-9570-d2973570fbdc","Type":"ContainerDied","Data":"84387e9f1eda4be1e2e13f245e7866daad306dd7bc81eda92adfe5267e83ba52"} Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.808596 4881 scope.go:117] "RemoveContainer" containerID="47cb2e9443fbe79dc10dfaee5ff0983a904efe0dfa8880c83f37fe646f71a44c" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.808607 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.860849 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.878660 4881 scope.go:117] "RemoveContainer" containerID="bf7c6034e2c42d9e693656ae69979f8a5455f71ca251857c2ffd4e50430c4b59" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.886869 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.900639 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:22:11 crc kubenswrapper[4881]: E0121 11:22:11.901341 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="ceilometer-notification-agent" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.901359 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="ceilometer-notification-agent" Jan 21 11:22:11 crc kubenswrapper[4881]: E0121 11:22:11.901373 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="proxy-httpd" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.901381 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="proxy-httpd" Jan 21 11:22:11 crc kubenswrapper[4881]: E0121 11:22:11.901407 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="sg-core" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.901415 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="sg-core" Jan 21 11:22:11 crc kubenswrapper[4881]: E0121 11:22:11.901439 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="ceilometer-central-agent" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.901447 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="ceilometer-central-agent" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.901704 4881 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="sg-core" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.901729 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="ceilometer-central-agent" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.901744 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="ceilometer-notification-agent" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.901753 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="proxy-httpd" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.907006 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.910885 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.911145 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.915058 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.922423 4881 scope.go:117] "RemoveContainer" containerID="461544715f4a3f154544e0f37c4e4bbc147310a0bd62815eae5302504de75f07" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.964662 4881 scope.go:117] "RemoveContainer" containerID="05eebaca7eead0950dd873a8603c6201a9b2dc1e384271cdb00b8530ee218101" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.989330 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.989609 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/864daf3b-9b84-4a77-b70d-7574975a1759-log-httpd\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.989804 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/864daf3b-9b84-4a77-b70d-7574975a1759-run-httpd\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.989930 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.990207 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-scripts\") pod \"ceilometer-0\" (UID: 
\"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.990374 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8lc5\" (UniqueName: \"kubernetes.io/projected/864daf3b-9b84-4a77-b70d-7574975a1759-kube-api-access-h8lc5\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.990523 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-config-data\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:12 crc kubenswrapper[4881]: I0121 11:22:12.093205 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/864daf3b-9b84-4a77-b70d-7574975a1759-log-httpd\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:12 crc kubenswrapper[4881]: I0121 11:22:12.093287 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/864daf3b-9b84-4a77-b70d-7574975a1759-run-httpd\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:12 crc kubenswrapper[4881]: I0121 11:22:12.093324 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:12 crc kubenswrapper[4881]: I0121 11:22:12.093395 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-scripts\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:12 crc kubenswrapper[4881]: I0121 11:22:12.093443 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8lc5\" (UniqueName: \"kubernetes.io/projected/864daf3b-9b84-4a77-b70d-7574975a1759-kube-api-access-h8lc5\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:12 crc kubenswrapper[4881]: I0121 11:22:12.093535 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-config-data\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:12 crc kubenswrapper[4881]: I0121 11:22:12.094001 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:12 crc kubenswrapper[4881]: I0121 11:22:12.094164 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/864daf3b-9b84-4a77-b70d-7574975a1759-run-httpd\") pod 
\"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:12 crc kubenswrapper[4881]: I0121 11:22:12.095476 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/864daf3b-9b84-4a77-b70d-7574975a1759-log-httpd\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:12 crc kubenswrapper[4881]: I0121 11:22:12.099220 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:12 crc kubenswrapper[4881]: I0121 11:22:12.099480 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-scripts\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:12 crc kubenswrapper[4881]: I0121 11:22:12.100387 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-config-data\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:12 crc kubenswrapper[4881]: I0121 11:22:12.112520 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:12 crc kubenswrapper[4881]: I0121 11:22:12.116163 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8lc5\" (UniqueName: \"kubernetes.io/projected/864daf3b-9b84-4a77-b70d-7574975a1759-kube-api-access-h8lc5\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:12 crc kubenswrapper[4881]: I0121 11:22:12.245960 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:22:13 crc kubenswrapper[4881]: I0121 11:22:13.186885 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:22:13 crc kubenswrapper[4881]: I0121 11:22:13.346857 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" path="/var/lib/kubelet/pods/28ca8213-9b24-4785-9570-d2973570fbdc/volumes" Jan 21 11:22:13 crc kubenswrapper[4881]: I0121 11:22:13.839821 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"864daf3b-9b84-4a77-b70d-7574975a1759","Type":"ContainerStarted","Data":"503a25d56c550049491832816edbc48c05afa818af9138db9e45c13fbbda3c04"} Jan 21 11:22:14 crc kubenswrapper[4881]: I0121 11:22:14.362271 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:22:22 crc kubenswrapper[4881]: I0121 11:22:22.049297 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-f7mmp" event={"ID":"16c22e38-1b3d-44b8-9519-0769200d708b","Type":"ContainerStarted","Data":"45d2c9cf95b1e6ab35e425681a61a8e4775263f35ab1c8463912de139e00b535"} Jan 21 11:22:22 crc kubenswrapper[4881]: I0121 11:22:22.052164 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"864daf3b-9b84-4a77-b70d-7574975a1759","Type":"ContainerStarted","Data":"c587b5f1d4ce6bd63009ab70ac3c2d60e9a361552ad74baf6eee5e9cbaf12b08"} Jan 21 11:22:22 crc kubenswrapper[4881]: I0121 11:22:22.067192 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-f7mmp" podStartSLOduration=1.874513886 podStartE2EDuration="18.067170307s" podCreationTimestamp="2026-01-21 11:22:04 +0000 UTC" firstStartedPulling="2026-01-21 11:22:05.43555418 +0000 UTC m=+1512.695510649" lastFinishedPulling="2026-01-21 11:22:21.628210601 +0000 UTC m=+1528.888167070" observedRunningTime="2026-01-21 11:22:22.062621634 +0000 UTC m=+1529.322578103" watchObservedRunningTime="2026-01-21 11:22:22.067170307 +0000 UTC m=+1529.327126776" Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.118174 4881 generic.go:334] "Generic (PLEG): container finished" podID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerID="4ba0181030ceb68e7fdb5249d09391d40feea2fca13e45d6b4d9c7f3ba56c71d" exitCode=137 Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.118862 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e","Type":"ContainerDied","Data":"4ba0181030ceb68e7fdb5249d09391d40feea2fca13e45d6b4d9c7f3ba56c71d"} Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.118924 4881 scope.go:117] "RemoveContainer" containerID="5ccae223d32b8d30267f4d247c29e77d1942427c122a26bc75e9b00b89fa3bc0" Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.130885 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"864daf3b-9b84-4a77-b70d-7574975a1759","Type":"ContainerStarted","Data":"9f19d662dd7c7d2e019ff9b54fc69e7ca9f3be17c295e4af48f920e1e9ca9860"} Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.292092 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.493049 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-logs\") pod \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.493456 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wffxr\" (UniqueName: \"kubernetes.io/projected/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-kube-api-access-wffxr\") pod \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.493523 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-custom-prometheus-ca\") pod \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.493605 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-combined-ca-bundle\") pod \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.493800 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-config-data\") pod \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.495074 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-logs" (OuterVolumeSpecName: "logs") pod "ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" (UID: "ee4e7116-c2cd-43d5-af6b-9f30b5053e0e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.502958 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-kube-api-access-wffxr" (OuterVolumeSpecName: "kube-api-access-wffxr") pod "ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" (UID: "ee4e7116-c2cd-43d5-af6b-9f30b5053e0e"). InnerVolumeSpecName "kube-api-access-wffxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.538894 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" (UID: "ee4e7116-c2cd-43d5-af6b-9f30b5053e0e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.542373 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" (UID: "ee4e7116-c2cd-43d5-af6b-9f30b5053e0e"). 
InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.577168 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-config-data" (OuterVolumeSpecName: "config-data") pod "ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" (UID: "ee4e7116-c2cd-43d5-af6b-9f30b5053e0e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.599296 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wffxr\" (UniqueName: \"kubernetes.io/projected/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-kube-api-access-wffxr\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.599589 4881 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.599657 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.599725 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.599830 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.144734 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"864daf3b-9b84-4a77-b70d-7574975a1759","Type":"ContainerStarted","Data":"e43c16a8d49069db18e2f00c6f35aa7e319b33e147379724b98cc6a207964853"} Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.146862 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e","Type":"ContainerDied","Data":"29d3adbd836eae43fe470435c7cc82a51d0ed6187ef1f30da41d37c41cb401fb"} Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.146918 4881 scope.go:117] "RemoveContainer" containerID="4ba0181030ceb68e7fdb5249d09391d40feea2fca13e45d6b4d9c7f3ba56c71d" Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.147059 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.198003 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.212877 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.228136 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 21 11:22:24 crc kubenswrapper[4881]: E0121 11:22:24.229856 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerName="watcher-decision-engine" Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.229884 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerName="watcher-decision-engine" Jan 21 11:22:24 crc kubenswrapper[4881]: E0121 11:22:24.229905 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerName="watcher-decision-engine" Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.229913 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerName="watcher-decision-engine" Jan 21 11:22:24 crc kubenswrapper[4881]: E0121 11:22:24.229930 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerName="watcher-decision-engine" Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.229938 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerName="watcher-decision-engine" Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.230248 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerName="watcher-decision-engine" Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.230260 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerName="watcher-decision-engine" Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.230278 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerName="watcher-decision-engine" Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.231035 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.234626 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.248583 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.317231 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a227ee4-7a4c-4cb6-991c-d137119a2a6e-logs\") pod \"watcher-decision-engine-0\" (UID: \"1a227ee4-7a4c-4cb6-991c-d137119a2a6e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.318035 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ns2m\" (UniqueName: \"kubernetes.io/projected/1a227ee4-7a4c-4cb6-991c-d137119a2a6e-kube-api-access-7ns2m\") pod \"watcher-decision-engine-0\" (UID: \"1a227ee4-7a4c-4cb6-991c-d137119a2a6e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.318385 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a227ee4-7a4c-4cb6-991c-d137119a2a6e-config-data\") pod \"watcher-decision-engine-0\" (UID: \"1a227ee4-7a4c-4cb6-991c-d137119a2a6e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.318545 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1a227ee4-7a4c-4cb6-991c-d137119a2a6e-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"1a227ee4-7a4c-4cb6-991c-d137119a2a6e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.318704 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a227ee4-7a4c-4cb6-991c-d137119a2a6e-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"1a227ee4-7a4c-4cb6-991c-d137119a2a6e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.421407 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a227ee4-7a4c-4cb6-991c-d137119a2a6e-logs\") pod \"watcher-decision-engine-0\" (UID: \"1a227ee4-7a4c-4cb6-991c-d137119a2a6e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.421521 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ns2m\" (UniqueName: \"kubernetes.io/projected/1a227ee4-7a4c-4cb6-991c-d137119a2a6e-kube-api-access-7ns2m\") pod \"watcher-decision-engine-0\" (UID: \"1a227ee4-7a4c-4cb6-991c-d137119a2a6e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.421622 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a227ee4-7a4c-4cb6-991c-d137119a2a6e-config-data\") pod \"watcher-decision-engine-0\" (UID: \"1a227ee4-7a4c-4cb6-991c-d137119a2a6e\") " pod="openstack/watcher-decision-engine-0" Jan 
21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.421648 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1a227ee4-7a4c-4cb6-991c-d137119a2a6e-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"1a227ee4-7a4c-4cb6-991c-d137119a2a6e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.421676 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a227ee4-7a4c-4cb6-991c-d137119a2a6e-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"1a227ee4-7a4c-4cb6-991c-d137119a2a6e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.425504 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a227ee4-7a4c-4cb6-991c-d137119a2a6e-logs\") pod \"watcher-decision-engine-0\" (UID: \"1a227ee4-7a4c-4cb6-991c-d137119a2a6e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.438430 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1a227ee4-7a4c-4cb6-991c-d137119a2a6e-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"1a227ee4-7a4c-4cb6-991c-d137119a2a6e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.438603 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a227ee4-7a4c-4cb6-991c-d137119a2a6e-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"1a227ee4-7a4c-4cb6-991c-d137119a2a6e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.439249 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a227ee4-7a4c-4cb6-991c-d137119a2a6e-config-data\") pod \"watcher-decision-engine-0\" (UID: \"1a227ee4-7a4c-4cb6-991c-d137119a2a6e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.444843 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ns2m\" (UniqueName: \"kubernetes.io/projected/1a227ee4-7a4c-4cb6-991c-d137119a2a6e-kube-api-access-7ns2m\") pod \"watcher-decision-engine-0\" (UID: \"1a227ee4-7a4c-4cb6-991c-d137119a2a6e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.553898 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 21 11:22:25 crc kubenswrapper[4881]: I0121 11:22:25.152615 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 21 11:22:26 crc kubenswrapper[4881]: I0121 11:22:26.034939 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" path="/var/lib/kubelet/pods/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e/volumes" Jan 21 11:22:26 crc kubenswrapper[4881]: I0121 11:22:26.178729 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"1a227ee4-7a4c-4cb6-991c-d137119a2a6e","Type":"ContainerStarted","Data":"6fad4b4fe9a8836c203f47f9b07542d89a464d477f7736896f152c617459d659"} Jan 21 11:22:27 crc kubenswrapper[4881]: I0121 11:22:27.190735 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"1a227ee4-7a4c-4cb6-991c-d137119a2a6e","Type":"ContainerStarted","Data":"856f738a7852caad106da5e207aa3fbda01bc189067e48decf62dedbc4c6c6c1"} Jan 21 11:22:27 crc kubenswrapper[4881]: I0121 11:22:27.194178 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"864daf3b-9b84-4a77-b70d-7574975a1759","Type":"ContainerStarted","Data":"c4d1c7b32460d66f1d454b2f673559cd15c9520eb920941f2f0afa5d440392f4"} Jan 21 11:22:27 crc kubenswrapper[4881]: I0121 11:22:27.194379 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="proxy-httpd" containerID="cri-o://c4d1c7b32460d66f1d454b2f673559cd15c9520eb920941f2f0afa5d440392f4" gracePeriod=30 Jan 21 11:22:27 crc kubenswrapper[4881]: I0121 11:22:27.194404 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="ceilometer-notification-agent" containerID="cri-o://9f19d662dd7c7d2e019ff9b54fc69e7ca9f3be17c295e4af48f920e1e9ca9860" gracePeriod=30 Jan 21 11:22:27 crc kubenswrapper[4881]: I0121 11:22:27.194509 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="sg-core" containerID="cri-o://e43c16a8d49069db18e2f00c6f35aa7e319b33e147379724b98cc6a207964853" gracePeriod=30 Jan 21 11:22:27 crc kubenswrapper[4881]: I0121 11:22:27.194595 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 11:22:27 crc kubenswrapper[4881]: I0121 11:22:27.194324 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="ceilometer-central-agent" containerID="cri-o://c587b5f1d4ce6bd63009ab70ac3c2d60e9a361552ad74baf6eee5e9cbaf12b08" gracePeriod=30 Jan 21 11:22:27 crc kubenswrapper[4881]: I0121 11:22:27.234600 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=3.234578709 podStartE2EDuration="3.234578709s" podCreationTimestamp="2026-01-21 11:22:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:22:27.227958854 +0000 UTC m=+1534.487915323" watchObservedRunningTime="2026-01-21 11:22:27.234578709 +0000 UTC m=+1534.494535198" Jan 21 11:22:27 crc 
kubenswrapper[4881]: I0121 11:22:27.287554 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.7941580550000005 podStartE2EDuration="16.287534467s" podCreationTimestamp="2026-01-21 11:22:11 +0000 UTC" firstStartedPulling="2026-01-21 11:22:13.196539078 +0000 UTC m=+1520.456495547" lastFinishedPulling="2026-01-21 11:22:24.68991549 +0000 UTC m=+1531.949871959" observedRunningTime="2026-01-21 11:22:27.283023205 +0000 UTC m=+1534.542979664" watchObservedRunningTime="2026-01-21 11:22:27.287534467 +0000 UTC m=+1534.547490936" Jan 21 11:22:28 crc kubenswrapper[4881]: I0121 11:22:28.207838 4881 generic.go:334] "Generic (PLEG): container finished" podID="864daf3b-9b84-4a77-b70d-7574975a1759" containerID="c4d1c7b32460d66f1d454b2f673559cd15c9520eb920941f2f0afa5d440392f4" exitCode=0 Jan 21 11:22:28 crc kubenswrapper[4881]: I0121 11:22:28.208132 4881 generic.go:334] "Generic (PLEG): container finished" podID="864daf3b-9b84-4a77-b70d-7574975a1759" containerID="e43c16a8d49069db18e2f00c6f35aa7e319b33e147379724b98cc6a207964853" exitCode=2 Jan 21 11:22:28 crc kubenswrapper[4881]: I0121 11:22:28.208141 4881 generic.go:334] "Generic (PLEG): container finished" podID="864daf3b-9b84-4a77-b70d-7574975a1759" containerID="9f19d662dd7c7d2e019ff9b54fc69e7ca9f3be17c295e4af48f920e1e9ca9860" exitCode=0 Jan 21 11:22:28 crc kubenswrapper[4881]: I0121 11:22:28.208054 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"864daf3b-9b84-4a77-b70d-7574975a1759","Type":"ContainerDied","Data":"c4d1c7b32460d66f1d454b2f673559cd15c9520eb920941f2f0afa5d440392f4"} Jan 21 11:22:28 crc kubenswrapper[4881]: I0121 11:22:28.208241 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"864daf3b-9b84-4a77-b70d-7574975a1759","Type":"ContainerDied","Data":"e43c16a8d49069db18e2f00c6f35aa7e319b33e147379724b98cc6a207964853"} Jan 21 11:22:28 crc kubenswrapper[4881]: I0121 11:22:28.208267 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"864daf3b-9b84-4a77-b70d-7574975a1759","Type":"ContainerDied","Data":"9f19d662dd7c7d2e019ff9b54fc69e7ca9f3be17c295e4af48f920e1e9ca9860"} Jan 21 11:22:34 crc kubenswrapper[4881]: I0121 11:22:34.555148 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 21 11:22:34 crc kubenswrapper[4881]: I0121 11:22:34.586360 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 21 11:22:35 crc kubenswrapper[4881]: I0121 11:22:35.352366 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 21 11:22:35 crc kubenswrapper[4881]: I0121 11:22:35.382363 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 21 11:22:36 crc kubenswrapper[4881]: I0121 11:22:36.365729 4881 generic.go:334] "Generic (PLEG): container finished" podID="864daf3b-9b84-4a77-b70d-7574975a1759" containerID="c587b5f1d4ce6bd63009ab70ac3c2d60e9a361552ad74baf6eee5e9cbaf12b08" exitCode=0 Jan 21 11:22:36 crc kubenswrapper[4881]: I0121 11:22:36.365963 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"864daf3b-9b84-4a77-b70d-7574975a1759","Type":"ContainerDied","Data":"c587b5f1d4ce6bd63009ab70ac3c2d60e9a361552ad74baf6eee5e9cbaf12b08"} Jan 21 11:22:37 crc 
kubenswrapper[4881]: I0121 11:22:37.074857 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.134738 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/864daf3b-9b84-4a77-b70d-7574975a1759-run-httpd\") pod \"864daf3b-9b84-4a77-b70d-7574975a1759\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.134824 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/864daf3b-9b84-4a77-b70d-7574975a1759-log-httpd\") pod \"864daf3b-9b84-4a77-b70d-7574975a1759\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.134905 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-sg-core-conf-yaml\") pod \"864daf3b-9b84-4a77-b70d-7574975a1759\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.134997 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-combined-ca-bundle\") pod \"864daf3b-9b84-4a77-b70d-7574975a1759\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.135088 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-scripts\") pod \"864daf3b-9b84-4a77-b70d-7574975a1759\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.135288 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-config-data\") pod \"864daf3b-9b84-4a77-b70d-7574975a1759\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.135322 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8lc5\" (UniqueName: \"kubernetes.io/projected/864daf3b-9b84-4a77-b70d-7574975a1759-kube-api-access-h8lc5\") pod \"864daf3b-9b84-4a77-b70d-7574975a1759\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.139525 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/864daf3b-9b84-4a77-b70d-7574975a1759-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "864daf3b-9b84-4a77-b70d-7574975a1759" (UID: "864daf3b-9b84-4a77-b70d-7574975a1759"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.141197 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/864daf3b-9b84-4a77-b70d-7574975a1759-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "864daf3b-9b84-4a77-b70d-7574975a1759" (UID: "864daf3b-9b84-4a77-b70d-7574975a1759"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.141770 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-scripts" (OuterVolumeSpecName: "scripts") pod "864daf3b-9b84-4a77-b70d-7574975a1759" (UID: "864daf3b-9b84-4a77-b70d-7574975a1759"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.146009 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/864daf3b-9b84-4a77-b70d-7574975a1759-kube-api-access-h8lc5" (OuterVolumeSpecName: "kube-api-access-h8lc5") pod "864daf3b-9b84-4a77-b70d-7574975a1759" (UID: "864daf3b-9b84-4a77-b70d-7574975a1759"). InnerVolumeSpecName "kube-api-access-h8lc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.166704 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "864daf3b-9b84-4a77-b70d-7574975a1759" (UID: "864daf3b-9b84-4a77-b70d-7574975a1759"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.238302 4881 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/864daf3b-9b84-4a77-b70d-7574975a1759-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.238345 4881 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.238359 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.238371 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8lc5\" (UniqueName: \"kubernetes.io/projected/864daf3b-9b84-4a77-b70d-7574975a1759-kube-api-access-h8lc5\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.238384 4881 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/864daf3b-9b84-4a77-b70d-7574975a1759-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.240533 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "864daf3b-9b84-4a77-b70d-7574975a1759" (UID: "864daf3b-9b84-4a77-b70d-7574975a1759"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.273028 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-config-data" (OuterVolumeSpecName: "config-data") pod "864daf3b-9b84-4a77-b70d-7574975a1759" (UID: "864daf3b-9b84-4a77-b70d-7574975a1759"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.340885 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.340918 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.398918 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"864daf3b-9b84-4a77-b70d-7574975a1759","Type":"ContainerDied","Data":"503a25d56c550049491832816edbc48c05afa818af9138db9e45c13fbbda3c04"} Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.398980 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.398997 4881 scope.go:117] "RemoveContainer" containerID="c4d1c7b32460d66f1d454b2f673559cd15c9520eb920941f2f0afa5d440392f4" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.440982 4881 scope.go:117] "RemoveContainer" containerID="e43c16a8d49069db18e2f00c6f35aa7e319b33e147379724b98cc6a207964853" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.444325 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.476910 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.490103 4881 scope.go:117] "RemoveContainer" containerID="9f19d662dd7c7d2e019ff9b54fc69e7ca9f3be17c295e4af48f920e1e9ca9860" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.494572 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:22:37 crc kubenswrapper[4881]: E0121 11:22:37.495186 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="ceilometer-central-agent" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.495215 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="ceilometer-central-agent" Jan 21 11:22:37 crc kubenswrapper[4881]: E0121 11:22:37.495241 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="sg-core" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.495250 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="sg-core" Jan 21 11:22:37 crc kubenswrapper[4881]: E0121 11:22:37.495302 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="ceilometer-notification-agent" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.495313 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="ceilometer-notification-agent" Jan 21 11:22:37 crc kubenswrapper[4881]: E0121 11:22:37.495346 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerName="watcher-decision-engine" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.495382 4881 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerName="watcher-decision-engine" Jan 21 11:22:37 crc kubenswrapper[4881]: E0121 11:22:37.495404 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="proxy-httpd" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.495412 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="proxy-httpd" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.496555 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="proxy-httpd" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.496582 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerName="watcher-decision-engine" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.496809 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="ceilometer-central-agent" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.496844 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="ceilometer-notification-agent" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.496862 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="sg-core" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.499726 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.503883 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.504157 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.506590 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.524675 4881 scope.go:117] "RemoveContainer" containerID="c587b5f1d4ce6bd63009ab70ac3c2d60e9a361552ad74baf6eee5e9cbaf12b08" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.546411 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-scripts\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.546570 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-config-data\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.546708 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.546842 4881 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgzxk\" (UniqueName: \"kubernetes.io/projected/20eeb602-9c98-48ed-a9c9-22121156e8cb-kube-api-access-zgzxk\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.546987 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20eeb602-9c98-48ed-a9c9-22121156e8cb-log-httpd\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.547132 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.547272 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20eeb602-9c98-48ed-a9c9-22121156e8cb-run-httpd\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.650073 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20eeb602-9c98-48ed-a9c9-22121156e8cb-log-httpd\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.650189 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.650237 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20eeb602-9c98-48ed-a9c9-22121156e8cb-run-httpd\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.650282 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-scripts\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.650312 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-config-data\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.650385 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0" Jan 21 11:22:37 
crc kubenswrapper[4881]: I0121 11:22:37.650426 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgzxk\" (UniqueName: \"kubernetes.io/projected/20eeb602-9c98-48ed-a9c9-22121156e8cb-kube-api-access-zgzxk\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.650890 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20eeb602-9c98-48ed-a9c9-22121156e8cb-log-httpd\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.652008 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20eeb602-9c98-48ed-a9c9-22121156e8cb-run-httpd\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.655355 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.655666 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-scripts\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.656704 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-config-data\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.656984 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.672217 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgzxk\" (UniqueName: \"kubernetes.io/projected/20eeb602-9c98-48ed-a9c9-22121156e8cb-kube-api-access-zgzxk\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.822905 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:22:38 crc kubenswrapper[4881]: I0121 11:22:38.299701 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:22:38 crc kubenswrapper[4881]: W0121 11:22:38.311922 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20eeb602_9c98_48ed_a9c9_22121156e8cb.slice/crio-98b63a4387f707fe8989f7007a02efb416a3ce182b681d864a6fffaef05cd43d WatchSource:0}: Error finding container 98b63a4387f707fe8989f7007a02efb416a3ce182b681d864a6fffaef05cd43d: Status 404 returned error can't find the container with id 98b63a4387f707fe8989f7007a02efb416a3ce182b681d864a6fffaef05cd43d Jan 21 11:22:38 crc kubenswrapper[4881]: I0121 11:22:38.415089 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20eeb602-9c98-48ed-a9c9-22121156e8cb","Type":"ContainerStarted","Data":"98b63a4387f707fe8989f7007a02efb416a3ce182b681d864a6fffaef05cd43d"} Jan 21 11:22:39 crc kubenswrapper[4881]: I0121 11:22:39.711379 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" path="/var/lib/kubelet/pods/864daf3b-9b84-4a77-b70d-7574975a1759/volumes" Jan 21 11:22:39 crc kubenswrapper[4881]: I0121 11:22:39.747176 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20eeb602-9c98-48ed-a9c9-22121156e8cb","Type":"ContainerStarted","Data":"f833baf807f57255c45be1ba58cccaca032385ccba346e4fc3846694862bc6ee"} Jan 21 11:22:39 crc kubenswrapper[4881]: I0121 11:22:39.747220 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20eeb602-9c98-48ed-a9c9-22121156e8cb","Type":"ContainerStarted","Data":"8256e63406ff9c5a7c526341a649b275e3f5ab402c57f45ac53e47b1d11393f9"} Jan 21 11:22:40 crc kubenswrapper[4881]: I0121 11:22:40.760774 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20eeb602-9c98-48ed-a9c9-22121156e8cb","Type":"ContainerStarted","Data":"19d2c0708e63a625c9564d43bfbff6b4bf382eb29c4f5fe75600d774080fe1d6"} Jan 21 11:22:40 crc kubenswrapper[4881]: I0121 11:22:40.766205 4881 generic.go:334] "Generic (PLEG): container finished" podID="16c22e38-1b3d-44b8-9519-0769200d708b" containerID="45d2c9cf95b1e6ab35e425681a61a8e4775263f35ab1c8463912de139e00b535" exitCode=0 Jan 21 11:22:40 crc kubenswrapper[4881]: I0121 11:22:40.766254 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-f7mmp" event={"ID":"16c22e38-1b3d-44b8-9519-0769200d708b","Type":"ContainerDied","Data":"45d2c9cf95b1e6ab35e425681a61a8e4775263f35ab1c8463912de139e00b535"} Jan 21 11:22:41 crc kubenswrapper[4881]: I0121 11:22:41.780897 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20eeb602-9c98-48ed-a9c9-22121156e8cb","Type":"ContainerStarted","Data":"ebf63005cec886f7073127e6f8a1b1d91309382b4d83ebbd9aca189eabae9b37"} Jan 21 11:22:41 crc kubenswrapper[4881]: I0121 11:22:41.781207 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 11:22:41 crc kubenswrapper[4881]: I0121 11:22:41.821722 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.896080456 podStartE2EDuration="4.821702997s" podCreationTimestamp="2026-01-21 11:22:37 +0000 UTC" firstStartedPulling="2026-01-21 
11:22:38.315015642 +0000 UTC m=+1545.574972111" lastFinishedPulling="2026-01-21 11:22:41.240638193 +0000 UTC m=+1548.500594652" observedRunningTime="2026-01-21 11:22:41.815247257 +0000 UTC m=+1549.075203736" watchObservedRunningTime="2026-01-21 11:22:41.821702997 +0000 UTC m=+1549.081659456" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.213153 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.371577 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-config-data\") pod \"16c22e38-1b3d-44b8-9519-0769200d708b\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.371993 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-scripts\") pod \"16c22e38-1b3d-44b8-9519-0769200d708b\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.372045 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfw75\" (UniqueName: \"kubernetes.io/projected/16c22e38-1b3d-44b8-9519-0769200d708b-kube-api-access-vfw75\") pod \"16c22e38-1b3d-44b8-9519-0769200d708b\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.372076 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-combined-ca-bundle\") pod \"16c22e38-1b3d-44b8-9519-0769200d708b\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.377988 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-scripts" (OuterVolumeSpecName: "scripts") pod "16c22e38-1b3d-44b8-9519-0769200d708b" (UID: "16c22e38-1b3d-44b8-9519-0769200d708b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.378828 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16c22e38-1b3d-44b8-9519-0769200d708b-kube-api-access-vfw75" (OuterVolumeSpecName: "kube-api-access-vfw75") pod "16c22e38-1b3d-44b8-9519-0769200d708b" (UID: "16c22e38-1b3d-44b8-9519-0769200d708b"). InnerVolumeSpecName "kube-api-access-vfw75". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.405737 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16c22e38-1b3d-44b8-9519-0769200d708b" (UID: "16c22e38-1b3d-44b8-9519-0769200d708b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.410660 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-config-data" (OuterVolumeSpecName: "config-data") pod "16c22e38-1b3d-44b8-9519-0769200d708b" (UID: "16c22e38-1b3d-44b8-9519-0769200d708b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.475148 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.475193 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.475208 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfw75\" (UniqueName: \"kubernetes.io/projected/16c22e38-1b3d-44b8-9519-0769200d708b-kube-api-access-vfw75\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.475222 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.794708 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.794768 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-f7mmp" event={"ID":"16c22e38-1b3d-44b8-9519-0769200d708b","Type":"ContainerDied","Data":"6a75d9ea9e41983b4baba3e71a4e5dcc957acdbd7dcf5242117832a4b32a615c"} Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.794818 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a75d9ea9e41983b4baba3e71a4e5dcc957acdbd7dcf5242117832a4b32a615c" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.921365 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 11:22:42 crc kubenswrapper[4881]: E0121 11:22:42.922006 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16c22e38-1b3d-44b8-9519-0769200d708b" containerName="nova-cell0-conductor-db-sync" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.922030 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="16c22e38-1b3d-44b8-9519-0769200d708b" containerName="nova-cell0-conductor-db-sync" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.922283 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="16c22e38-1b3d-44b8-9519-0769200d708b" containerName="nova-cell0-conductor-db-sync" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.923299 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.928453 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.928543 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-fjj24" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.933996 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 11:22:43 crc kubenswrapper[4881]: I0121 11:22:43.089196 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bm5d\" (UniqueName: \"kubernetes.io/projected/dc5fb029-b5fa-4065-adb2-af2e634785fc-kube-api-access-5bm5d\") pod \"nova-cell0-conductor-0\" (UID: \"dc5fb029-b5fa-4065-adb2-af2e634785fc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 11:22:43 crc kubenswrapper[4881]: I0121 11:22:43.089586 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc5fb029-b5fa-4065-adb2-af2e634785fc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"dc5fb029-b5fa-4065-adb2-af2e634785fc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 11:22:43 crc kubenswrapper[4881]: I0121 11:22:43.089669 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc5fb029-b5fa-4065-adb2-af2e634785fc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"dc5fb029-b5fa-4065-adb2-af2e634785fc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 11:22:43 crc kubenswrapper[4881]: I0121 11:22:43.192771 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc5fb029-b5fa-4065-adb2-af2e634785fc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"dc5fb029-b5fa-4065-adb2-af2e634785fc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 11:22:43 crc kubenswrapper[4881]: I0121 11:22:43.193186 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc5fb029-b5fa-4065-adb2-af2e634785fc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"dc5fb029-b5fa-4065-adb2-af2e634785fc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 11:22:43 crc kubenswrapper[4881]: I0121 11:22:43.193431 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bm5d\" (UniqueName: \"kubernetes.io/projected/dc5fb029-b5fa-4065-adb2-af2e634785fc-kube-api-access-5bm5d\") pod \"nova-cell0-conductor-0\" (UID: \"dc5fb029-b5fa-4065-adb2-af2e634785fc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 11:22:43 crc kubenswrapper[4881]: I0121 11:22:43.196934 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc5fb029-b5fa-4065-adb2-af2e634785fc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"dc5fb029-b5fa-4065-adb2-af2e634785fc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 11:22:43 crc kubenswrapper[4881]: I0121 11:22:43.197152 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc5fb029-b5fa-4065-adb2-af2e634785fc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"dc5fb029-b5fa-4065-adb2-af2e634785fc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 11:22:43 crc kubenswrapper[4881]: I0121 11:22:43.210622 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bm5d\" (UniqueName: \"kubernetes.io/projected/dc5fb029-b5fa-4065-adb2-af2e634785fc-kube-api-access-5bm5d\") pod \"nova-cell0-conductor-0\" (UID: \"dc5fb029-b5fa-4065-adb2-af2e634785fc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 11:22:43 crc kubenswrapper[4881]: I0121 11:22:43.251515 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 11:22:44 crc kubenswrapper[4881]: I0121 11:22:44.031722 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 11:22:44 crc kubenswrapper[4881]: I0121 11:22:44.815434 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"dc5fb029-b5fa-4065-adb2-af2e634785fc","Type":"ContainerStarted","Data":"78e76eaf0f1c596c93d80443dc862532d9aec8c20fa4611433d0d4e887f066ae"} Jan 21 11:22:46 crc kubenswrapper[4881]: I0121 11:22:46.907200 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"dc5fb029-b5fa-4065-adb2-af2e634785fc","Type":"ContainerStarted","Data":"f1bfaecb54264853b5148d400e3526c63e010da6d27ad91e1985d00445cde11c"} Jan 21 11:22:46 crc kubenswrapper[4881]: I0121 11:22:46.908576 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 21 11:22:46 crc kubenswrapper[4881]: I0121 11:22:46.934702 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=4.934683193 podStartE2EDuration="4.934683193s" podCreationTimestamp="2026-01-21 11:22:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:22:46.932600552 +0000 UTC m=+1554.192557021" watchObservedRunningTime="2026-01-21 11:22:46.934683193 +0000 UTC m=+1554.194639662" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.283675 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.829977 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-qgqh7"] Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.831438 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qgqh7" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.833550 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.834695 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.855117 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qgqh7\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") " pod="openstack/nova-cell0-cell-mapping-qgqh7" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.855206 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-config-data\") pod \"nova-cell0-cell-mapping-qgqh7\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") " pod="openstack/nova-cell0-cell-mapping-qgqh7" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.855288 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c6l8\" (UniqueName: \"kubernetes.io/projected/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-kube-api-access-5c6l8\") pod \"nova-cell0-cell-mapping-qgqh7\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") " pod="openstack/nova-cell0-cell-mapping-qgqh7" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.855358 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-scripts\") pod \"nova-cell0-cell-mapping-qgqh7\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") " pod="openstack/nova-cell0-cell-mapping-qgqh7" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.856998 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qgqh7"] Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.956206 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c6l8\" (UniqueName: \"kubernetes.io/projected/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-kube-api-access-5c6l8\") pod \"nova-cell0-cell-mapping-qgqh7\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") " pod="openstack/nova-cell0-cell-mapping-qgqh7" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.956492 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-scripts\") pod \"nova-cell0-cell-mapping-qgqh7\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") " pod="openstack/nova-cell0-cell-mapping-qgqh7" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.956566 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qgqh7\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") " pod="openstack/nova-cell0-cell-mapping-qgqh7" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.956625 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-config-data\") pod \"nova-cell0-cell-mapping-qgqh7\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") " pod="openstack/nova-cell0-cell-mapping-qgqh7" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.962680 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-config-data\") pod \"nova-cell0-cell-mapping-qgqh7\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") " pod="openstack/nova-cell0-cell-mapping-qgqh7" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.963471 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qgqh7\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") " pod="openstack/nova-cell0-cell-mapping-qgqh7" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.966109 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-scripts\") pod \"nova-cell0-cell-mapping-qgqh7\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") " pod="openstack/nova-cell0-cell-mapping-qgqh7" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.987432 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c6l8\" (UniqueName: \"kubernetes.io/projected/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-kube-api-access-5c6l8\") pod \"nova-cell0-cell-mapping-qgqh7\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") " pod="openstack/nova-cell0-cell-mapping-qgqh7" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.038219 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.040669 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.042948 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.058602 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-config-data\") pod \"nova-api-0\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " pod="openstack/nova-api-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.058641 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lqtx\" (UniqueName: \"kubernetes.io/projected/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-kube-api-access-2lqtx\") pod \"nova-api-0\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " pod="openstack/nova-api-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.058667 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-logs\") pod \"nova-api-0\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " pod="openstack/nova-api-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.058802 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " pod="openstack/nova-api-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.075888 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.092397 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.094435 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.098158 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.134690 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.136590 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.146328 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.152620 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qgqh7" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.161069 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.165212 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-config-data\") pod \"nova-api-0\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " pod="openstack/nova-api-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.165269 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lqtx\" (UniqueName: \"kubernetes.io/projected/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-kube-api-access-2lqtx\") pod \"nova-api-0\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " pod="openstack/nova-api-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.165337 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-logs\") pod \"nova-api-0\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " pod="openstack/nova-api-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.165425 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " pod="openstack/nova-api-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.166040 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-logs\") pod \"nova-api-0\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " pod="openstack/nova-api-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.175008 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.188510 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-config-data\") pod \"nova-api-0\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " pod="openstack/nova-api-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.203371 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lqtx\" (UniqueName: \"kubernetes.io/projected/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-kube-api-access-2lqtx\") pod \"nova-api-0\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " pod="openstack/nova-api-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.204645 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " pod="openstack/nova-api-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.267480 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3345073b-8907-4de9-829f-73d8e79a01bb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3345073b-8907-4de9-829f-73d8e79a01bb\") " pod="openstack/nova-scheduler-0" Jan 21 
11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.267524 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ff1a29-d6ee-4911-bb22-165aca6d8605-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"50ff1a29-d6ee-4911-bb22-165aca6d8605\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.267556 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz64s\" (UniqueName: \"kubernetes.io/projected/50ff1a29-d6ee-4911-bb22-165aca6d8605-kube-api-access-xz64s\") pod \"nova-cell1-novncproxy-0\" (UID: \"50ff1a29-d6ee-4911-bb22-165aca6d8605\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.267587 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50ff1a29-d6ee-4911-bb22-165aca6d8605-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"50ff1a29-d6ee-4911-bb22-165aca6d8605\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.268470 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3345073b-8907-4de9-829f-73d8e79a01bb-config-data\") pod \"nova-scheduler-0\" (UID: \"3345073b-8907-4de9-829f-73d8e79a01bb\") " pod="openstack/nova-scheduler-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.268578 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vb4w\" (UniqueName: \"kubernetes.io/projected/3345073b-8907-4de9-829f-73d8e79a01bb-kube-api-access-5vb4w\") pod \"nova-scheduler-0\" (UID: \"3345073b-8907-4de9-829f-73d8e79a01bb\") " pod="openstack/nova-scheduler-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.370347 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz64s\" (UniqueName: \"kubernetes.io/projected/50ff1a29-d6ee-4911-bb22-165aca6d8605-kube-api-access-xz64s\") pod \"nova-cell1-novncproxy-0\" (UID: \"50ff1a29-d6ee-4911-bb22-165aca6d8605\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.370701 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50ff1a29-d6ee-4911-bb22-165aca6d8605-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"50ff1a29-d6ee-4911-bb22-165aca6d8605\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.370893 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3345073b-8907-4de9-829f-73d8e79a01bb-config-data\") pod \"nova-scheduler-0\" (UID: \"3345073b-8907-4de9-829f-73d8e79a01bb\") " pod="openstack/nova-scheduler-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.370925 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vb4w\" (UniqueName: \"kubernetes.io/projected/3345073b-8907-4de9-829f-73d8e79a01bb-kube-api-access-5vb4w\") pod \"nova-scheduler-0\" (UID: \"3345073b-8907-4de9-829f-73d8e79a01bb\") " pod="openstack/nova-scheduler-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.371017 4881 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3345073b-8907-4de9-829f-73d8e79a01bb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3345073b-8907-4de9-829f-73d8e79a01bb\") " pod="openstack/nova-scheduler-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.371047 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ff1a29-d6ee-4911-bb22-165aca6d8605-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"50ff1a29-d6ee-4911-bb22-165aca6d8605\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.374632 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.380205 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.382219 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50ff1a29-d6ee-4911-bb22-165aca6d8605-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"50ff1a29-d6ee-4911-bb22-165aca6d8605\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.402388 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.406091 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3345073b-8907-4de9-829f-73d8e79a01bb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3345073b-8907-4de9-829f-73d8e79a01bb\") " pod="openstack/nova-scheduler-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.419858 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.425936 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3345073b-8907-4de9-829f-73d8e79a01bb-config-data\") pod \"nova-scheduler-0\" (UID: \"3345073b-8907-4de9-829f-73d8e79a01bb\") " pod="openstack/nova-scheduler-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.464150 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz64s\" (UniqueName: \"kubernetes.io/projected/50ff1a29-d6ee-4911-bb22-165aca6d8605-kube-api-access-xz64s\") pod \"nova-cell1-novncproxy-0\" (UID: \"50ff1a29-d6ee-4911-bb22-165aca6d8605\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.464654 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ff1a29-d6ee-4911-bb22-165aca6d8605-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"50ff1a29-d6ee-4911-bb22-165aca6d8605\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.465265 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vb4w\" (UniqueName: \"kubernetes.io/projected/3345073b-8907-4de9-829f-73d8e79a01bb-kube-api-access-5vb4w\") pod \"nova-scheduler-0\" (UID: \"3345073b-8907-4de9-829f-73d8e79a01bb\") " pod="openstack/nova-scheduler-0" Jan 21 11:22:54 crc 
kubenswrapper[4881]: I0121 11:22:54.465966 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.467024 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.487328 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3527c16-7547-4e37-bcda-452193c45fee-logs\") pod \"nova-metadata-0\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") " pod="openstack/nova-metadata-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.487446 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3527c16-7547-4e37-bcda-452193c45fee-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") " pod="openstack/nova-metadata-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.487627 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3527c16-7547-4e37-bcda-452193c45fee-config-data\") pod \"nova-metadata-0\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") " pod="openstack/nova-metadata-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.487844 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st84x\" (UniqueName: \"kubernetes.io/projected/d3527c16-7547-4e37-bcda-452193c45fee-kube-api-access-st84x\") pod \"nova-metadata-0\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") " pod="openstack/nova-metadata-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.578435 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9f55bccdc-ghvhg"] Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.580893 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.592511 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3527c16-7547-4e37-bcda-452193c45fee-logs\") pod \"nova-metadata-0\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") " pod="openstack/nova-metadata-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.592596 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3527c16-7547-4e37-bcda-452193c45fee-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") " pod="openstack/nova-metadata-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.592673 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3527c16-7547-4e37-bcda-452193c45fee-config-data\") pod \"nova-metadata-0\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") " pod="openstack/nova-metadata-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.592772 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st84x\" (UniqueName: \"kubernetes.io/projected/d3527c16-7547-4e37-bcda-452193c45fee-kube-api-access-st84x\") pod \"nova-metadata-0\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") " pod="openstack/nova-metadata-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.593664 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3527c16-7547-4e37-bcda-452193c45fee-logs\") pod \"nova-metadata-0\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") " pod="openstack/nova-metadata-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.598640 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.599626 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9f55bccdc-ghvhg"] Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.601510 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3527c16-7547-4e37-bcda-452193c45fee-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") " pod="openstack/nova-metadata-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.601740 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3527c16-7547-4e37-bcda-452193c45fee-config-data\") pod \"nova-metadata-0\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") " pod="openstack/nova-metadata-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.614631 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st84x\" (UniqueName: \"kubernetes.io/projected/d3527c16-7547-4e37-bcda-452193c45fee-kube-api-access-st84x\") pod \"nova-metadata-0\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") " pod="openstack/nova-metadata-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.695164 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-config\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.695483 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-ovsdbserver-nb\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.695508 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-ovsdbserver-sb\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.695534 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-dns-swift-storage-0\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.695566 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-dns-svc\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.697669 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prhq6\" (UniqueName: 
\"kubernetes.io/projected/859758f9-0dc2-4397-a75a-b098eaabe613-kube-api-access-prhq6\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.805479 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prhq6\" (UniqueName: \"kubernetes.io/projected/859758f9-0dc2-4397-a75a-b098eaabe613-kube-api-access-prhq6\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.805557 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-config\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.805594 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-ovsdbserver-nb\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.805628 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-ovsdbserver-sb\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.805677 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-dns-swift-storage-0\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.805708 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-dns-svc\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.807127 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-dns-svc\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.807915 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-config\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.808411 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-ovsdbserver-nb\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: 
\"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.808932 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-ovsdbserver-sb\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.810569 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-dns-swift-storage-0\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.827604 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.838529 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prhq6\" (UniqueName: \"kubernetes.io/projected/859758f9-0dc2-4397-a75a-b098eaabe613-kube-api-access-prhq6\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.915244 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.932362 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qgqh7"] Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.043546 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qgqh7" event={"ID":"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad","Type":"ContainerStarted","Data":"c99b0af6c38bc6fdca36563516e7441a2da5b379535ed6ab05553b2802c64c82"} Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.464194 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sf7xj"] Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.467158 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-sf7xj" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.478381 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.478539 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.504311 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sf7xj"] Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.530768 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-scripts\") pod \"nova-cell1-conductor-db-sync-sf7xj\" (UID: \"813d73da-18da-40fa-b949-bbeec6604ac9\") " pod="openstack/nova-cell1-conductor-db-sync-sf7xj" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.530837 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtwfp\" (UniqueName: \"kubernetes.io/projected/813d73da-18da-40fa-b949-bbeec6604ac9-kube-api-access-xtwfp\") pod \"nova-cell1-conductor-db-sync-sf7xj\" (UID: \"813d73da-18da-40fa-b949-bbeec6604ac9\") " pod="openstack/nova-cell1-conductor-db-sync-sf7xj" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.530860 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-sf7xj\" (UID: \"813d73da-18da-40fa-b949-bbeec6604ac9\") " pod="openstack/nova-cell1-conductor-db-sync-sf7xj" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.530955 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-config-data\") pod \"nova-cell1-conductor-db-sync-sf7xj\" (UID: \"813d73da-18da-40fa-b949-bbeec6604ac9\") " pod="openstack/nova-cell1-conductor-db-sync-sf7xj" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.593964 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.634317 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-config-data\") pod \"nova-cell1-conductor-db-sync-sf7xj\" (UID: \"813d73da-18da-40fa-b949-bbeec6604ac9\") " pod="openstack/nova-cell1-conductor-db-sync-sf7xj" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.635005 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-scripts\") pod \"nova-cell1-conductor-db-sync-sf7xj\" (UID: \"813d73da-18da-40fa-b949-bbeec6604ac9\") " pod="openstack/nova-cell1-conductor-db-sync-sf7xj" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.635076 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtwfp\" (UniqueName: \"kubernetes.io/projected/813d73da-18da-40fa-b949-bbeec6604ac9-kube-api-access-xtwfp\") pod \"nova-cell1-conductor-db-sync-sf7xj\" (UID: 
\"813d73da-18da-40fa-b949-bbeec6604ac9\") " pod="openstack/nova-cell1-conductor-db-sync-sf7xj" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.635133 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-sf7xj\" (UID: \"813d73da-18da-40fa-b949-bbeec6604ac9\") " pod="openstack/nova-cell1-conductor-db-sync-sf7xj" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.641777 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-config-data\") pod \"nova-cell1-conductor-db-sync-sf7xj\" (UID: \"813d73da-18da-40fa-b949-bbeec6604ac9\") " pod="openstack/nova-cell1-conductor-db-sync-sf7xj" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.642092 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-sf7xj\" (UID: \"813d73da-18da-40fa-b949-bbeec6604ac9\") " pod="openstack/nova-cell1-conductor-db-sync-sf7xj" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.643366 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-scripts\") pod \"nova-cell1-conductor-db-sync-sf7xj\" (UID: \"813d73da-18da-40fa-b949-bbeec6604ac9\") " pod="openstack/nova-cell1-conductor-db-sync-sf7xj" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.655457 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtwfp\" (UniqueName: \"kubernetes.io/projected/813d73da-18da-40fa-b949-bbeec6604ac9-kube-api-access-xtwfp\") pod \"nova-cell1-conductor-db-sync-sf7xj\" (UID: \"813d73da-18da-40fa-b949-bbeec6604ac9\") " pod="openstack/nova-cell1-conductor-db-sync-sf7xj" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.724335 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:22:55 crc kubenswrapper[4881]: W0121 11:22:55.726930 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf39b23f8_2c7e_46d6_8e59_7980b1d2c27c.slice/crio-a5806f41aee852119e408747b6a9159dc66b4ea14896033d8861a45a5e319518 WatchSource:0}: Error finding container a5806f41aee852119e408747b6a9159dc66b4ea14896033d8861a45a5e319518: Status 404 returned error can't find the container with id a5806f41aee852119e408747b6a9159dc66b4ea14896033d8861a45a5e319518 Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.738449 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.804577 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-sf7xj" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.944034 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:22:56 crc kubenswrapper[4881]: I0121 11:22:56.116252 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d3527c16-7547-4e37-bcda-452193c45fee","Type":"ContainerStarted","Data":"19360c29690f2d877803b5397f0b64081dcdd4e4fc63374ceab9aad4daa3f1c3"} Jan 21 11:22:56 crc kubenswrapper[4881]: I0121 11:22:56.131773 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c","Type":"ContainerStarted","Data":"a5806f41aee852119e408747b6a9159dc66b4ea14896033d8861a45a5e319518"} Jan 21 11:22:56 crc kubenswrapper[4881]: I0121 11:22:56.140828 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9f55bccdc-ghvhg"] Jan 21 11:22:56 crc kubenswrapper[4881]: I0121 11:22:56.150264 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"50ff1a29-d6ee-4911-bb22-165aca6d8605","Type":"ContainerStarted","Data":"6aaf4e142828aa790e377df87440347084937144bb74fce4d8edde8de8915f28"} Jan 21 11:22:56 crc kubenswrapper[4881]: I0121 11:22:56.177363 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qgqh7" event={"ID":"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad","Type":"ContainerStarted","Data":"0055b21217090cd15d9d0b17356b22b40f32a70cf1a35f1e9043b6cc9a7f1186"} Jan 21 11:22:56 crc kubenswrapper[4881]: I0121 11:22:56.186530 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3345073b-8907-4de9-829f-73d8e79a01bb","Type":"ContainerStarted","Data":"b74119743bb7cd487418f8d001a744431b3d7a1804f43dd5e7dc76b033b63247"} Jan 21 11:22:56 crc kubenswrapper[4881]: W0121 11:22:56.205319 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod859758f9_0dc2_4397_a75a_b098eaabe613.slice/crio-f75b793fa7a8fa638c746656a34aafcf67f449119cc5beb64d5b0d6054ef7320 WatchSource:0}: Error finding container f75b793fa7a8fa638c746656a34aafcf67f449119cc5beb64d5b0d6054ef7320: Status 404 returned error can't find the container with id f75b793fa7a8fa638c746656a34aafcf67f449119cc5beb64d5b0d6054ef7320 Jan 21 11:22:56 crc kubenswrapper[4881]: I0121 11:22:56.734654 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-qgqh7" podStartSLOduration=3.734630157 podStartE2EDuration="3.734630157s" podCreationTimestamp="2026-01-21 11:22:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:22:56.231534024 +0000 UTC m=+1563.491490503" watchObservedRunningTime="2026-01-21 11:22:56.734630157 +0000 UTC m=+1563.994586636" Jan 21 11:22:56 crc kubenswrapper[4881]: I0121 11:22:56.737385 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sf7xj"] Jan 21 11:22:56 crc kubenswrapper[4881]: W0121 11:22:56.748247 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod813d73da_18da_40fa_b949_bbeec6604ac9.slice/crio-27b08df378991b2d98990d6780e79b553e25ff279cca08756a8d58c7593ae3cb WatchSource:0}: Error finding container 
27b08df378991b2d98990d6780e79b553e25ff279cca08756a8d58c7593ae3cb: Status 404 returned error can't find the container with id 27b08df378991b2d98990d6780e79b553e25ff279cca08756a8d58c7593ae3cb Jan 21 11:22:56 crc kubenswrapper[4881]: E0121 11:22:56.867343 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod859758f9_0dc2_4397_a75a_b098eaabe613.slice/crio-conmon-14c1d2dd7151297f216e34923a28ce4dc55ea08298e597088fec945419be539f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod859758f9_0dc2_4397_a75a_b098eaabe613.slice/crio-14c1d2dd7151297f216e34923a28ce4dc55ea08298e597088fec945419be539f.scope\": RecentStats: unable to find data in memory cache]" Jan 21 11:22:57 crc kubenswrapper[4881]: I0121 11:22:57.209166 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-sf7xj" event={"ID":"813d73da-18da-40fa-b949-bbeec6604ac9","Type":"ContainerStarted","Data":"02004fbf2f26b53236286799b468ab78450f8557fc37a01d6e78bf2e7876befc"} Jan 21 11:22:57 crc kubenswrapper[4881]: I0121 11:22:57.209502 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-sf7xj" event={"ID":"813d73da-18da-40fa-b949-bbeec6604ac9","Type":"ContainerStarted","Data":"27b08df378991b2d98990d6780e79b553e25ff279cca08756a8d58c7593ae3cb"} Jan 21 11:22:57 crc kubenswrapper[4881]: I0121 11:22:57.218171 4881 generic.go:334] "Generic (PLEG): container finished" podID="859758f9-0dc2-4397-a75a-b098eaabe613" containerID="14c1d2dd7151297f216e34923a28ce4dc55ea08298e597088fec945419be539f" exitCode=0 Jan 21 11:22:57 crc kubenswrapper[4881]: I0121 11:22:57.220479 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" event={"ID":"859758f9-0dc2-4397-a75a-b098eaabe613","Type":"ContainerDied","Data":"14c1d2dd7151297f216e34923a28ce4dc55ea08298e597088fec945419be539f"} Jan 21 11:22:57 crc kubenswrapper[4881]: I0121 11:22:57.220529 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" event={"ID":"859758f9-0dc2-4397-a75a-b098eaabe613","Type":"ContainerStarted","Data":"f75b793fa7a8fa638c746656a34aafcf67f449119cc5beb64d5b0d6054ef7320"} Jan 21 11:22:57 crc kubenswrapper[4881]: I0121 11:22:57.235233 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-sf7xj" podStartSLOduration=2.2352118 podStartE2EDuration="2.2352118s" podCreationTimestamp="2026-01-21 11:22:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:22:57.22830879 +0000 UTC m=+1564.488265259" watchObservedRunningTime="2026-01-21 11:22:57.2352118 +0000 UTC m=+1564.495168269" Jan 21 11:22:58 crc kubenswrapper[4881]: I0121 11:22:58.628207 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:22:58 crc kubenswrapper[4881]: I0121 11:22:58.660478 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.284154 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" 
event={"ID":"859758f9-0dc2-4397-a75a-b098eaabe613","Type":"ContainerStarted","Data":"ddbf5564e0fec706a2bc3be62fec290ad0d5c0dccb7ad63e5048139ac59265e0"} Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.284647 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.289372 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"50ff1a29-d6ee-4911-bb22-165aca6d8605","Type":"ContainerStarted","Data":"9d3665845c2c2c09903d0aa16a7538de5b4dcf05cef7d82865d9c9d446cdaf41"} Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.289415 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="50ff1a29-d6ee-4911-bb22-165aca6d8605" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://9d3665845c2c2c09903d0aa16a7538de5b4dcf05cef7d82865d9c9d446cdaf41" gracePeriod=30 Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.297565 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3345073b-8907-4de9-829f-73d8e79a01bb","Type":"ContainerStarted","Data":"574febd8df92e0f37adc7968b35f9fcf1e5f52e202a4769da6f91161f9a9f02c"} Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.300204 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d3527c16-7547-4e37-bcda-452193c45fee","Type":"ContainerStarted","Data":"d5d6be9da18cdb336cad44c85f030f31c3a241f6234a1b668281031e8ffb56ec"} Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.300246 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d3527c16-7547-4e37-bcda-452193c45fee","Type":"ContainerStarted","Data":"000840a5458dc374424237a1e0edaa7bc61f3e5c2c1a3524dfdcefbcaa258c53"} Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.300420 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d3527c16-7547-4e37-bcda-452193c45fee" containerName="nova-metadata-log" containerID="cri-o://000840a5458dc374424237a1e0edaa7bc61f3e5c2c1a3524dfdcefbcaa258c53" gracePeriod=30 Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.300491 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d3527c16-7547-4e37-bcda-452193c45fee" containerName="nova-metadata-metadata" containerID="cri-o://d5d6be9da18cdb336cad44c85f030f31c3a241f6234a1b668281031e8ffb56ec" gracePeriod=30 Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.306985 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c","Type":"ContainerStarted","Data":"b1e94b3b719b1a2213452fd275be74fdb796e7c03d99fa5695466085e68a91fd"} Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.307326 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c","Type":"ContainerStarted","Data":"71bf37a912cb19763de6a839082bf72ecae64d550a077ed5461e0d2fa0d9be80"} Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.309967 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" podStartSLOduration=7.309948371 podStartE2EDuration="7.309948371s" podCreationTimestamp="2026-01-21 11:22:54 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:23:01.309932631 +0000 UTC m=+1568.569889110" watchObservedRunningTime="2026-01-21 11:23:01.309948371 +0000 UTC m=+1568.569904840" Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.343384 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.731849469 podStartE2EDuration="7.34335909s" podCreationTimestamp="2026-01-21 11:22:54 +0000 UTC" firstStartedPulling="2026-01-21 11:22:55.728730318 +0000 UTC m=+1562.988686787" lastFinishedPulling="2026-01-21 11:23:00.340239939 +0000 UTC m=+1567.600196408" observedRunningTime="2026-01-21 11:23:01.327710017 +0000 UTC m=+1568.587666506" watchObservedRunningTime="2026-01-21 11:23:01.34335909 +0000 UTC m=+1568.603315559" Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.395830 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.780540632 podStartE2EDuration="8.395807394s" podCreationTimestamp="2026-01-21 11:22:53 +0000 UTC" firstStartedPulling="2026-01-21 11:22:55.730361548 +0000 UTC m=+1562.990318017" lastFinishedPulling="2026-01-21 11:23:00.34562831 +0000 UTC m=+1567.605584779" observedRunningTime="2026-01-21 11:23:01.355086697 +0000 UTC m=+1568.615043156" watchObservedRunningTime="2026-01-21 11:23:01.395807394 +0000 UTC m=+1568.655763873" Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.403659 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.672504875 podStartE2EDuration="7.403635086s" podCreationTimestamp="2026-01-21 11:22:54 +0000 UTC" firstStartedPulling="2026-01-21 11:22:55.608485832 +0000 UTC m=+1562.868442301" lastFinishedPulling="2026-01-21 11:23:00.339616043 +0000 UTC m=+1567.599572512" observedRunningTime="2026-01-21 11:23:01.375735773 +0000 UTC m=+1568.635692242" watchObservedRunningTime="2026-01-21 11:23:01.403635086 +0000 UTC m=+1568.663591555" Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.428664 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.045126613 podStartE2EDuration="7.428644819s" podCreationTimestamp="2026-01-21 11:22:54 +0000 UTC" firstStartedPulling="2026-01-21 11:22:55.954703693 +0000 UTC m=+1563.214660162" lastFinishedPulling="2026-01-21 11:23:00.338221899 +0000 UTC m=+1567.598178368" observedRunningTime="2026-01-21 11:23:01.404936608 +0000 UTC m=+1568.664893077" watchObservedRunningTime="2026-01-21 11:23:01.428644819 +0000 UTC m=+1568.688601288" Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.556909 4881 generic.go:334] "Generic (PLEG): container finished" podID="d3527c16-7547-4e37-bcda-452193c45fee" containerID="d5d6be9da18cdb336cad44c85f030f31c3a241f6234a1b668281031e8ffb56ec" exitCode=0 Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.557583 4881 generic.go:334] "Generic (PLEG): container finished" podID="d3527c16-7547-4e37-bcda-452193c45fee" containerID="000840a5458dc374424237a1e0edaa7bc61f3e5c2c1a3524dfdcefbcaa258c53" exitCode=143 Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.557116 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d3527c16-7547-4e37-bcda-452193c45fee","Type":"ContainerDied","Data":"d5d6be9da18cdb336cad44c85f030f31c3a241f6234a1b668281031e8ffb56ec"} Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 
11:23:02.558776 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d3527c16-7547-4e37-bcda-452193c45fee","Type":"ContainerDied","Data":"000840a5458dc374424237a1e0edaa7bc61f3e5c2c1a3524dfdcefbcaa258c53"}
Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.558811 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d3527c16-7547-4e37-bcda-452193c45fee","Type":"ContainerDied","Data":"19360c29690f2d877803b5397f0b64081dcdd4e4fc63374ceab9aad4daa3f1c3"}
Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.558830 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19360c29690f2d877803b5397f0b64081dcdd4e4fc63374ceab9aad4daa3f1c3"
Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.563104 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.612467 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3527c16-7547-4e37-bcda-452193c45fee-combined-ca-bundle\") pod \"d3527c16-7547-4e37-bcda-452193c45fee\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") "
Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.612634 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3527c16-7547-4e37-bcda-452193c45fee-logs\") pod \"d3527c16-7547-4e37-bcda-452193c45fee\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") "
Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.613175 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3527c16-7547-4e37-bcda-452193c45fee-logs" (OuterVolumeSpecName: "logs") pod "d3527c16-7547-4e37-bcda-452193c45fee" (UID: "d3527c16-7547-4e37-bcda-452193c45fee"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.613191 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st84x\" (UniqueName: \"kubernetes.io/projected/d3527c16-7547-4e37-bcda-452193c45fee-kube-api-access-st84x\") pod \"d3527c16-7547-4e37-bcda-452193c45fee\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") "
Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.613358 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3527c16-7547-4e37-bcda-452193c45fee-config-data\") pod \"d3527c16-7547-4e37-bcda-452193c45fee\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") "
Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.614454 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3527c16-7547-4e37-bcda-452193c45fee-logs\") on node \"crc\" DevicePath \"\""
Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.619744 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3527c16-7547-4e37-bcda-452193c45fee-kube-api-access-st84x" (OuterVolumeSpecName: "kube-api-access-st84x") pod "d3527c16-7547-4e37-bcda-452193c45fee" (UID: "d3527c16-7547-4e37-bcda-452193c45fee"). InnerVolumeSpecName "kube-api-access-st84x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.648947 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3527c16-7547-4e37-bcda-452193c45fee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3527c16-7547-4e37-bcda-452193c45fee" (UID: "d3527c16-7547-4e37-bcda-452193c45fee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.659597 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3527c16-7547-4e37-bcda-452193c45fee-config-data" (OuterVolumeSpecName: "config-data") pod "d3527c16-7547-4e37-bcda-452193c45fee" (UID: "d3527c16-7547-4e37-bcda-452193c45fee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.716186 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st84x\" (UniqueName: \"kubernetes.io/projected/d3527c16-7547-4e37-bcda-452193c45fee-kube-api-access-st84x\") on node \"crc\" DevicePath \"\""
Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.716261 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3527c16-7547-4e37-bcda-452193c45fee-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.716277 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3527c16-7547-4e37-bcda-452193c45fee-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.570550 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.602163 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.616749 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.653489 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 11:23:03 crc kubenswrapper[4881]: E0121 11:23:03.654380 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3527c16-7547-4e37-bcda-452193c45fee" containerName="nova-metadata-metadata"
Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.654410 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3527c16-7547-4e37-bcda-452193c45fee" containerName="nova-metadata-metadata"
Jan 21 11:23:03 crc kubenswrapper[4881]: E0121 11:23:03.654452 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3527c16-7547-4e37-bcda-452193c45fee" containerName="nova-metadata-log"
Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.654461 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3527c16-7547-4e37-bcda-452193c45fee" containerName="nova-metadata-log"
Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.654722 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3527c16-7547-4e37-bcda-452193c45fee" containerName="nova-metadata-log"
Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.654749 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3527c16-7547-4e37-bcda-452193c45fee" containerName="nova-metadata-metadata"
Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.656089 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.656204 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.661044 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.661240 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.839966 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtx52\" (UniqueName: \"kubernetes.io/projected/3c6ca904-2790-425f-81ac-37cdc543cf0f-kube-api-access-dtx52\") pod \"nova-metadata-0\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " pod="openstack/nova-metadata-0"
Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.840037 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c6ca904-2790-425f-81ac-37cdc543cf0f-logs\") pod \"nova-metadata-0\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " pod="openstack/nova-metadata-0"
Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.840382 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " pod="openstack/nova-metadata-0"
Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.840661 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " pod="openstack/nova-metadata-0"
Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.840866 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-config-data\") pod \"nova-metadata-0\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " pod="openstack/nova-metadata-0"
Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.942928 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " pod="openstack/nova-metadata-0"
Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.943083 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " pod="openstack/nova-metadata-0"
Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.943156 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-config-data\") pod \"nova-metadata-0\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " pod="openstack/nova-metadata-0"
Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.943228 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtx52\" (UniqueName: \"kubernetes.io/projected/3c6ca904-2790-425f-81ac-37cdc543cf0f-kube-api-access-dtx52\") pod \"nova-metadata-0\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " pod="openstack/nova-metadata-0"
Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.943265 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c6ca904-2790-425f-81ac-37cdc543cf0f-logs\") pod \"nova-metadata-0\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " pod="openstack/nova-metadata-0"
Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.943842 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c6ca904-2790-425f-81ac-37cdc543cf0f-logs\") pod \"nova-metadata-0\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " pod="openstack/nova-metadata-0"
Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.951331 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " pod="openstack/nova-metadata-0"
Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.967722 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-config-data\") pod \"nova-metadata-0\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " pod="openstack/nova-metadata-0"
Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.967945 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " pod="openstack/nova-metadata-0"
Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.970850 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtx52\" (UniqueName: \"kubernetes.io/projected/3c6ca904-2790-425f-81ac-37cdc543cf0f-kube-api-access-dtx52\") pod \"nova-metadata-0\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " pod="openstack/nova-metadata-0"
Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.981322 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 21 11:23:04 crc kubenswrapper[4881]: I0121 11:23:04.464758 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 11:23:04 crc kubenswrapper[4881]: I0121 11:23:04.467838 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 21 11:23:04 crc kubenswrapper[4881]: I0121 11:23:04.467879 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 21 11:23:04 crc kubenswrapper[4881]: I0121 11:23:04.468037 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 21 11:23:04 crc kubenswrapper[4881]: I0121 11:23:04.469578 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 21 11:23:04 crc kubenswrapper[4881]: I0121 11:23:04.529537 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 21 11:23:04 crc kubenswrapper[4881]: I0121 11:23:04.600987 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 21 11:23:04 crc kubenswrapper[4881]: I0121 11:23:04.602905 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c6ca904-2790-425f-81ac-37cdc543cf0f","Type":"ContainerStarted","Data":"2c61e4a0cf50faebb3da795860373a82c98ee972146a5292709cde146a4a9c15"}
Jan 21 11:23:04 crc kubenswrapper[4881]: I0121 11:23:04.653299 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 21 11:23:05 crc kubenswrapper[4881]: I0121 11:23:05.334596 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3527c16-7547-4e37-bcda-452193c45fee" path="/var/lib/kubelet/pods/d3527c16-7547-4e37-bcda-452193c45fee/volumes"
Jan 21 11:23:05 crc kubenswrapper[4881]: I0121 11:23:05.550069 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.207:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 11:23:05 crc kubenswrapper[4881]: I0121 11:23:05.550084 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.207:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 11:23:05 crc kubenswrapper[4881]: I0121 11:23:05.616208 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c6ca904-2790-425f-81ac-37cdc543cf0f","Type":"ContainerStarted","Data":"04b14eafe282879a10a549256a83522f141403e701c9d0a5d0f5ea8746de26b5"}
Jan 21 11:23:05 crc kubenswrapper[4881]: I0121 11:23:05.616264 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c6ca904-2790-425f-81ac-37cdc543cf0f","Type":"ContainerStarted","Data":"cb3c8eb696c2d6f70dd5b7efed28b2b6d15d294b8d97901355bfdcf5ce7eaa3e"}
Jan 21 11:23:05 crc kubenswrapper[4881]: I0121 11:23:05.650361 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.65033599 podStartE2EDuration="2.65033599s" podCreationTimestamp="2026-01-21 11:23:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:23:05.649131721 +0000 UTC m=+1572.909088190" watchObservedRunningTime="2026-01-21 11:23:05.65033599 +0000 UTC m=+1572.910292459"
Jan 21 11:23:07 crc kubenswrapper[4881]: I0121 11:23:07.831617 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 21 11:23:08 crc kubenswrapper[4881]: I0121 11:23:08.650325 4881 generic.go:334] "Generic (PLEG): container finished" podID="9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad" containerID="0055b21217090cd15d9d0b17356b22b40f32a70cf1a35f1e9043b6cc9a7f1186" exitCode=0
Jan 21 11:23:08 crc kubenswrapper[4881]: I0121 11:23:08.650365 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qgqh7" event={"ID":"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad","Type":"ContainerDied","Data":"0055b21217090cd15d9d0b17356b22b40f32a70cf1a35f1e9043b6cc9a7f1186"}
Jan 21 11:23:08 crc kubenswrapper[4881]: I0121 11:23:08.981906 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 21 11:23:08 crc kubenswrapper[4881]: I0121 11:23:08.982158 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 21 11:23:09 crc kubenswrapper[4881]: I0121 11:23:09.918197 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg"
Jan 21 11:23:10 crc kubenswrapper[4881]: I0121 11:23:10.465544 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c849cf559-fjllv"]
Jan 21 11:23:10 crc kubenswrapper[4881]: I0121 11:23:10.465798 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-c849cf559-fjllv" podUID="4a89a9d0-4859-41cb-896d-f1a91e854d7b" containerName="dnsmasq-dns" containerID="cri-o://520ec1cfcb7fa94d0057499475a0936b202225668f29de849ba69f710c127ead" gracePeriod=10
Jan 21 11:23:10 crc kubenswrapper[4881]: I0121 11:23:10.676776 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qgqh7" event={"ID":"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad","Type":"ContainerDied","Data":"c99b0af6c38bc6fdca36563516e7441a2da5b379535ed6ab05553b2802c64c82"}
Jan 21 11:23:10 crc kubenswrapper[4881]: I0121 11:23:10.676975 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c99b0af6c38bc6fdca36563516e7441a2da5b379535ed6ab05553b2802c64c82"
Jan 21 11:23:10 crc kubenswrapper[4881]: I0121 11:23:10.679201 4881 generic.go:334] "Generic (PLEG): container finished" podID="813d73da-18da-40fa-b949-bbeec6604ac9" containerID="02004fbf2f26b53236286799b468ab78450f8557fc37a01d6e78bf2e7876befc" exitCode=0
Jan 21 11:23:10 crc kubenswrapper[4881]: I0121 11:23:10.679336 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-sf7xj" event={"ID":"813d73da-18da-40fa-b949-bbeec6604ac9","Type":"ContainerDied","Data":"02004fbf2f26b53236286799b468ab78450f8557fc37a01d6e78bf2e7876befc"}
Jan 21 11:23:10 crc kubenswrapper[4881]: I0121 11:23:10.800940 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qgqh7"
Jan 21 11:23:10 crc kubenswrapper[4881]: I0121 11:23:10.926586 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c6l8\" (UniqueName: \"kubernetes.io/projected/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-kube-api-access-5c6l8\") pod \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") "
Jan 21 11:23:10 crc kubenswrapper[4881]: I0121 11:23:10.926721 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-config-data\") pod \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") "
Jan 21 11:23:10 crc kubenswrapper[4881]: I0121 11:23:10.927016 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-combined-ca-bundle\") pod \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") "
Jan 21 11:23:10 crc kubenswrapper[4881]: I0121 11:23:10.927240 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-scripts\") pod \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") "
Jan 21 11:23:10 crc kubenswrapper[4881]: I0121 11:23:10.932796 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-scripts" (OuterVolumeSpecName: "scripts") pod "9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad" (UID: "9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 11:23:10 crc kubenswrapper[4881]: I0121 11:23:10.933706 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-kube-api-access-5c6l8" (OuterVolumeSpecName: "kube-api-access-5c6l8") pod "9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad" (UID: "9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad"). InnerVolumeSpecName "kube-api-access-5c6l8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:23:10 crc kubenswrapper[4881]: I0121 11:23:10.959327 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-config-data" (OuterVolumeSpecName: "config-data") pod "9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad" (UID: "9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 11:23:10 crc kubenswrapper[4881]: I0121 11:23:10.987904 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad" (UID: "9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 11:23:11 crc kubenswrapper[4881]: I0121 11:23:11.030080 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 11:23:11 crc kubenswrapper[4881]: I0121 11:23:11.030510 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5c6l8\" (UniqueName: \"kubernetes.io/projected/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-kube-api-access-5c6l8\") on node \"crc\" DevicePath \"\""
Jan 21 11:23:11 crc kubenswrapper[4881]: I0121 11:23:11.030598 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 11:23:11 crc kubenswrapper[4881]: I0121 11:23:11.030683 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 11:23:11 crc kubenswrapper[4881]: I0121 11:23:11.693411 4881 generic.go:334] "Generic (PLEG): container finished" podID="4a89a9d0-4859-41cb-896d-f1a91e854d7b" containerID="520ec1cfcb7fa94d0057499475a0936b202225668f29de849ba69f710c127ead" exitCode=0
Jan 21 11:23:11 crc kubenswrapper[4881]: I0121 11:23:11.693654 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c849cf559-fjllv" event={"ID":"4a89a9d0-4859-41cb-896d-f1a91e854d7b","Type":"ContainerDied","Data":"520ec1cfcb7fa94d0057499475a0936b202225668f29de849ba69f710c127ead"}
Jan 21 11:23:11 crc kubenswrapper[4881]: I0121 11:23:11.693736 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qgqh7"
Jan 21 11:23:11 crc kubenswrapper[4881]: I0121 11:23:11.996525 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 21 11:23:11 crc kubenswrapper[4881]: I0121 11:23:11.996820 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" containerName="nova-api-log" containerID="cri-o://71bf37a912cb19763de6a839082bf72ecae64d550a077ed5461e0d2fa0d9be80" gracePeriod=30
Jan 21 11:23:11 crc kubenswrapper[4881]: I0121 11:23:11.996980 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" containerName="nova-api-api" containerID="cri-o://b1e94b3b719b1a2213452fd275be74fdb796e7c03d99fa5695466085e68a91fd" gracePeriod=30
Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.028854 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.029131 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="3345073b-8907-4de9-829f-73d8e79a01bb" containerName="nova-scheduler-scheduler" containerID="cri-o://574febd8df92e0f37adc7968b35f9fcf1e5f52e202a4769da6f91161f9a9f02c" gracePeriod=30
Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.051900 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.052166 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3c6ca904-2790-425f-81ac-37cdc543cf0f" containerName="nova-metadata-log" containerID="cri-o://cb3c8eb696c2d6f70dd5b7efed28b2b6d15d294b8d97901355bfdcf5ce7eaa3e" gracePeriod=30
Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.052292 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3c6ca904-2790-425f-81ac-37cdc543cf0f" containerName="nova-metadata-metadata" containerID="cri-o://04b14eafe282879a10a549256a83522f141403e701c9d0a5d0f5ea8746de26b5" gracePeriod=30
Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.350381 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-sf7xj"
Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.849491 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-sf7xj" event={"ID":"813d73da-18da-40fa-b949-bbeec6604ac9","Type":"ContainerDied","Data":"27b08df378991b2d98990d6780e79b553e25ff279cca08756a8d58c7593ae3cb"}
Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.849533 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27b08df378991b2d98990d6780e79b553e25ff279cca08756a8d58c7593ae3cb"
Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.849594 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-sf7xj"
Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.890608 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-scripts\") pod \"813d73da-18da-40fa-b949-bbeec6604ac9\" (UID: \"813d73da-18da-40fa-b949-bbeec6604ac9\") "
Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.890661 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-combined-ca-bundle\") pod \"813d73da-18da-40fa-b949-bbeec6604ac9\" (UID: \"813d73da-18da-40fa-b949-bbeec6604ac9\") "
Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.890891 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-config-data\") pod \"813d73da-18da-40fa-b949-bbeec6604ac9\" (UID: \"813d73da-18da-40fa-b949-bbeec6604ac9\") "
Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.891005 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtwfp\" (UniqueName: \"kubernetes.io/projected/813d73da-18da-40fa-b949-bbeec6604ac9-kube-api-access-xtwfp\") pod \"813d73da-18da-40fa-b949-bbeec6604ac9\" (UID: \"813d73da-18da-40fa-b949-bbeec6604ac9\") "
Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.944863 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-scripts" (OuterVolumeSpecName: "scripts") pod "813d73da-18da-40fa-b949-bbeec6604ac9" (UID: "813d73da-18da-40fa-b949-bbeec6604ac9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.944959 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/813d73da-18da-40fa-b949-bbeec6604ac9-kube-api-access-xtwfp" (OuterVolumeSpecName: "kube-api-access-xtwfp") pod "813d73da-18da-40fa-b949-bbeec6604ac9" (UID: "813d73da-18da-40fa-b949-bbeec6604ac9"). InnerVolumeSpecName "kube-api-access-xtwfp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.953857 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "813d73da-18da-40fa-b949-bbeec6604ac9" (UID: "813d73da-18da-40fa-b949-bbeec6604ac9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.991873 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-config-data" (OuterVolumeSpecName: "config-data") pod "813d73da-18da-40fa-b949-bbeec6604ac9" (UID: "813d73da-18da-40fa-b949-bbeec6604ac9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.993962 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.993986 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtwfp\" (UniqueName: \"kubernetes.io/projected/813d73da-18da-40fa-b949-bbeec6604ac9-kube-api-access-xtwfp\") on node \"crc\" DevicePath \"\""
Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.993997 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.994005 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.366104 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c849cf559-fjllv"
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.513473 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-config\") pod \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") "
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.513576 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cd8b7\" (UniqueName: \"kubernetes.io/projected/4a89a9d0-4859-41cb-896d-f1a91e854d7b-kube-api-access-cd8b7\") pod \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") "
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.513630 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-ovsdbserver-nb\") pod \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") "
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.513754 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-ovsdbserver-sb\") pod \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") "
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.513844 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-dns-svc\") pod \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") "
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.513893 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-dns-swift-storage-0\") pod \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") "
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.552005 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a89a9d0-4859-41cb-896d-f1a91e854d7b-kube-api-access-cd8b7" (OuterVolumeSpecName: "kube-api-access-cd8b7") pod "4a89a9d0-4859-41cb-896d-f1a91e854d7b" (UID: "4a89a9d0-4859-41cb-896d-f1a91e854d7b"). InnerVolumeSpecName "kube-api-access-cd8b7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.622443 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cd8b7\" (UniqueName: \"kubernetes.io/projected/4a89a9d0-4859-41cb-896d-f1a91e854d7b-kube-api-access-cd8b7\") on node \"crc\" DevicePath \"\""
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.628641 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4a89a9d0-4859-41cb-896d-f1a91e854d7b" (UID: "4a89a9d0-4859-41cb-896d-f1a91e854d7b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.674173 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4a89a9d0-4859-41cb-896d-f1a91e854d7b" (UID: "4a89a9d0-4859-41cb-896d-f1a91e854d7b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.687321 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-config" (OuterVolumeSpecName: "config") pod "4a89a9d0-4859-41cb-896d-f1a91e854d7b" (UID: "4a89a9d0-4859-41cb-896d-f1a91e854d7b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.697369 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4a89a9d0-4859-41cb-896d-f1a91e854d7b" (UID: "4a89a9d0-4859-41cb-896d-f1a91e854d7b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.727674 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-config\") on node \"crc\" DevicePath \"\""
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.727717 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.727727 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.727738 4881 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.738513 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4a89a9d0-4859-41cb-896d-f1a91e854d7b" (UID: "4a89a9d0-4859-41cb-896d-f1a91e854d7b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.829226 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.863243 4881 generic.go:334] "Generic (PLEG): container finished" podID="f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" containerID="71bf37a912cb19763de6a839082bf72ecae64d550a077ed5461e0d2fa0d9be80" exitCode=143
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.863326 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c","Type":"ContainerDied","Data":"71bf37a912cb19763de6a839082bf72ecae64d550a077ed5461e0d2fa0d9be80"}
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.865638 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c849cf559-fjllv" event={"ID":"4a89a9d0-4859-41cb-896d-f1a91e854d7b","Type":"ContainerDied","Data":"7d5f5a0fecb347a3031d8e9d038b27129aa5ce2b2e49dd11bb8a2bb4f461cdbf"}
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.865685 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c849cf559-fjllv"
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.865715 4881 scope.go:117] "RemoveContainer" containerID="520ec1cfcb7fa94d0057499475a0936b202225668f29de849ba69f710c127ead"
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.868291 4881 generic.go:334] "Generic (PLEG): container finished" podID="3c6ca904-2790-425f-81ac-37cdc543cf0f" containerID="04b14eafe282879a10a549256a83522f141403e701c9d0a5d0f5ea8746de26b5" exitCode=0
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.868583 4881 generic.go:334] "Generic (PLEG): container finished" podID="3c6ca904-2790-425f-81ac-37cdc543cf0f" containerID="cb3c8eb696c2d6f70dd5b7efed28b2b6d15d294b8d97901355bfdcf5ce7eaa3e" exitCode=143
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.868335 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c6ca904-2790-425f-81ac-37cdc543cf0f","Type":"ContainerDied","Data":"04b14eafe282879a10a549256a83522f141403e701c9d0a5d0f5ea8746de26b5"}
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.868632 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c6ca904-2790-425f-81ac-37cdc543cf0f","Type":"ContainerDied","Data":"cb3c8eb696c2d6f70dd5b7efed28b2b6d15d294b8d97901355bfdcf5ce7eaa3e"}
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.921431 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 21 11:23:13 crc kubenswrapper[4881]: E0121 11:23:13.922007 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="813d73da-18da-40fa-b949-bbeec6604ac9" containerName="nova-cell1-conductor-db-sync"
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.922030 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="813d73da-18da-40fa-b949-bbeec6604ac9" containerName="nova-cell1-conductor-db-sync"
Jan 21 11:23:13 crc kubenswrapper[4881]: E0121 11:23:13.922058 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a89a9d0-4859-41cb-896d-f1a91e854d7b" containerName="dnsmasq-dns"
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.922065 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a89a9d0-4859-41cb-896d-f1a91e854d7b" containerName="dnsmasq-dns"
Jan 21 11:23:13 crc kubenswrapper[4881]: E0121 11:23:13.922080 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a89a9d0-4859-41cb-896d-f1a91e854d7b" containerName="init"
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.922086 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a89a9d0-4859-41cb-896d-f1a91e854d7b" containerName="init"
Jan 21 11:23:13 crc kubenswrapper[4881]: E0121 11:23:13.922096 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad" containerName="nova-manage"
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.922102 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad" containerName="nova-manage"
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.922361 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a89a9d0-4859-41cb-896d-f1a91e854d7b" containerName="dnsmasq-dns"
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.922387 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad" containerName="nova-manage"
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.922404 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="813d73da-18da-40fa-b949-bbeec6604ac9" containerName="nova-cell1-conductor-db-sync"
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.923261 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.935304 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.939837 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c849cf559-fjllv"]
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.949853 4881 scope.go:117] "RemoveContainer" containerID="e80fa73fd255dd2a9302a2ee6b75f7b4cf8767d543328dc915247c69166c0c25"
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.951066 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c849cf559-fjllv"]
Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.982428 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.035102 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/161c46d2-7b98-4a9e-a648-ce25b966f589-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"161c46d2-7b98-4a9e-a648-ce25b966f589\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.035163 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4l8z\" (UniqueName: \"kubernetes.io/projected/161c46d2-7b98-4a9e-a648-ce25b966f589-kube-api-access-q4l8z\") pod \"nova-cell1-conductor-0\" (UID: \"161c46d2-7b98-4a9e-a648-ce25b966f589\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.036076 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/161c46d2-7b98-4a9e-a648-ce25b966f589-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"161c46d2-7b98-4a9e-a648-ce25b966f589\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.138072 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/161c46d2-7b98-4a9e-a648-ce25b966f589-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"161c46d2-7b98-4a9e-a648-ce25b966f589\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.138159 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/161c46d2-7b98-4a9e-a648-ce25b966f589-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"161c46d2-7b98-4a9e-a648-ce25b966f589\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.138183 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4l8z\" (UniqueName: \"kubernetes.io/projected/161c46d2-7b98-4a9e-a648-ce25b966f589-kube-api-access-q4l8z\") pod \"nova-cell1-conductor-0\" (UID: \"161c46d2-7b98-4a9e-a648-ce25b966f589\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.143770 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/161c46d2-7b98-4a9e-a648-ce25b966f589-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"161c46d2-7b98-4a9e-a648-ce25b966f589\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.148649 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/161c46d2-7b98-4a9e-a648-ce25b966f589-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"161c46d2-7b98-4a9e-a648-ce25b966f589\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.165757 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4l8z\" (UniqueName: \"kubernetes.io/projected/161c46d2-7b98-4a9e-a648-ce25b966f589-kube-api-access-q4l8z\") pod \"nova-cell1-conductor-0\" (UID: \"161c46d2-7b98-4a9e-a648-ce25b966f589\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.281255 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 21 11:23:14 crc kubenswrapper[4881]: E0121 11:23:14.471390 4881 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 574febd8df92e0f37adc7968b35f9fcf1e5f52e202a4769da6f91161f9a9f02c is running failed: container process not found" containerID="574febd8df92e0f37adc7968b35f9fcf1e5f52e202a4769da6f91161f9a9f02c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 21 11:23:14 crc kubenswrapper[4881]: E0121 11:23:14.472043 4881 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 574febd8df92e0f37adc7968b35f9fcf1e5f52e202a4769da6f91161f9a9f02c is running failed: container process not found" containerID="574febd8df92e0f37adc7968b35f9fcf1e5f52e202a4769da6f91161f9a9f02c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 21 11:23:14 crc kubenswrapper[4881]: E0121 11:23:14.473490 4881 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 574febd8df92e0f37adc7968b35f9fcf1e5f52e202a4769da6f91161f9a9f02c is running failed: container process not found" containerID="574febd8df92e0f37adc7968b35f9fcf1e5f52e202a4769da6f91161f9a9f02c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 21 11:23:14 crc kubenswrapper[4881]: E0121 11:23:14.473528 4881 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 574febd8df92e0f37adc7968b35f9fcf1e5f52e202a4769da6f91161f9a9f02c is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="3345073b-8907-4de9-829f-73d8e79a01bb" containerName="nova-scheduler-scheduler"
Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.486098 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.486320 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="c5b6c25e-e882-4ea4-a284-6f55bfe75093" containerName="kube-state-metrics" containerID="cri-o://af06053084a285bc01330cffd9858a387580ee179dad2789e77044a776e5acf8" gracePeriod=30
Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.858531 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.882976 4881 generic.go:334] "Generic (PLEG): container finished" podID="c5b6c25e-e882-4ea4-a284-6f55bfe75093" containerID="af06053084a285bc01330cffd9858a387580ee179dad2789e77044a776e5acf8" exitCode=2
Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.883077 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c5b6c25e-e882-4ea4-a284-6f55bfe75093","Type":"ContainerDied","Data":"af06053084a285bc01330cffd9858a387580ee179dad2789e77044a776e5acf8"}
Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.887711 4881 generic.go:334] "Generic (PLEG): container finished" podID="3345073b-8907-4de9-829f-73d8e79a01bb" containerID="574febd8df92e0f37adc7968b35f9fcf1e5f52e202a4769da6f91161f9a9f02c" exitCode=0
Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.887773 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3345073b-8907-4de9-829f-73d8e79a01bb","Type":"ContainerDied","Data":"574febd8df92e0f37adc7968b35f9fcf1e5f52e202a4769da6f91161f9a9f02c"}
Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.947837 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="c5b6c25e-e882-4ea4-a284-6f55bfe75093" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": dial tcp 10.217.0.112:8081: connect: connection refused"
Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.342836 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a89a9d0-4859-41cb-896d-f1a91e854d7b" path="/var/lib/kubelet/pods/4a89a9d0-4859-41cb-896d-f1a91e854d7b/volumes"
Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.401079 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.411526 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.472452 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-config-data\") pod \"3c6ca904-2790-425f-81ac-37cdc543cf0f\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") "
Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.472897 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-combined-ca-bundle\") pod \"3c6ca904-2790-425f-81ac-37cdc543cf0f\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") "
Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.472951 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtx52\" (UniqueName: \"kubernetes.io/projected/3c6ca904-2790-425f-81ac-37cdc543cf0f-kube-api-access-dtx52\") pod \"3c6ca904-2790-425f-81ac-37cdc543cf0f\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") "
Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.473107 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c6ca904-2790-425f-81ac-37cdc543cf0f-logs\") pod \"3c6ca904-2790-425f-81ac-37cdc543cf0f\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") "
Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.473126 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-nova-metadata-tls-certs\") pod \"3c6ca904-2790-425f-81ac-37cdc543cf0f\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") "
Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.478022 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c6ca904-2790-425f-81ac-37cdc543cf0f-logs" (OuterVolumeSpecName: "logs") pod "3c6ca904-2790-425f-81ac-37cdc543cf0f" (UID: "3c6ca904-2790-425f-81ac-37cdc543cf0f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.494286 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c6ca904-2790-425f-81ac-37cdc543cf0f-kube-api-access-dtx52" (OuterVolumeSpecName: "kube-api-access-dtx52") pod "3c6ca904-2790-425f-81ac-37cdc543cf0f" (UID: "3c6ca904-2790-425f-81ac-37cdc543cf0f"). InnerVolumeSpecName "kube-api-access-dtx52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.528887 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-config-data" (OuterVolumeSpecName: "config-data") pod "3c6ca904-2790-425f-81ac-37cdc543cf0f" (UID: "3c6ca904-2790-425f-81ac-37cdc543cf0f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.536360 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.539503 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c6ca904-2790-425f-81ac-37cdc543cf0f" (UID: "3c6ca904-2790-425f-81ac-37cdc543cf0f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.575017 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3345073b-8907-4de9-829f-73d8e79a01bb-config-data\") pod \"3345073b-8907-4de9-829f-73d8e79a01bb\" (UID: \"3345073b-8907-4de9-829f-73d8e79a01bb\") "
Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.575479 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3345073b-8907-4de9-829f-73d8e79a01bb-combined-ca-bundle\") pod \"3345073b-8907-4de9-829f-73d8e79a01bb\" (UID: \"3345073b-8907-4de9-829f-73d8e79a01bb\") "
Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.575622 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vb4w\" (UniqueName: \"kubernetes.io/projected/3345073b-8907-4de9-829f-73d8e79a01bb-kube-api-access-5vb4w\") pod \"3345073b-8907-4de9-829f-73d8e79a01bb\" (UID: \"3345073b-8907-4de9-829f-73d8e79a01bb\") "
Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.576295 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c6ca904-2790-425f-81ac-37cdc543cf0f-logs\") on node \"crc\" DevicePath \"\""
Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.576412 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.576511 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.576619 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtx52\" (UniqueName: \"kubernetes.io/projected/3c6ca904-2790-425f-81ac-37cdc543cf0f-kube-api-access-dtx52\") on node \"crc\" DevicePath \"\""
Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.585053 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3345073b-8907-4de9-829f-73d8e79a01bb-kube-api-access-5vb4w" (OuterVolumeSpecName: "kube-api-access-5vb4w") pod "3345073b-8907-4de9-829f-73d8e79a01bb" (UID: "3345073b-8907-4de9-829f-73d8e79a01bb"). InnerVolumeSpecName "kube-api-access-5vb4w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.593132 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3c6ca904-2790-425f-81ac-37cdc543cf0f" (UID: "3c6ca904-2790-425f-81ac-37cdc543cf0f"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.602753 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3345073b-8907-4de9-829f-73d8e79a01bb-config-data" (OuterVolumeSpecName: "config-data") pod "3345073b-8907-4de9-829f-73d8e79a01bb" (UID: "3345073b-8907-4de9-829f-73d8e79a01bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.644673 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3345073b-8907-4de9-829f-73d8e79a01bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3345073b-8907-4de9-829f-73d8e79a01bb" (UID: "3345073b-8907-4de9-829f-73d8e79a01bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.678098 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25992\" (UniqueName: \"kubernetes.io/projected/c5b6c25e-e882-4ea4-a284-6f55bfe75093-kube-api-access-25992\") pod \"c5b6c25e-e882-4ea4-a284-6f55bfe75093\" (UID: \"c5b6c25e-e882-4ea4-a284-6f55bfe75093\") "
Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.679220 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3345073b-8907-4de9-829f-73d8e79a01bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.679329 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vb4w\" (UniqueName: \"kubernetes.io/projected/3345073b-8907-4de9-829f-73d8e79a01bb-kube-api-access-5vb4w\") on node \"crc\" DevicePath \"\""
Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.679426 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3345073b-8907-4de9-829f-73d8e79a01bb-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.679505 4881 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.686278 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5b6c25e-e882-4ea4-a284-6f55bfe75093-kube-api-access-25992" (OuterVolumeSpecName: "kube-api-access-25992") pod "c5b6c25e-e882-4ea4-a284-6f55bfe75093" (UID: "c5b6c25e-e882-4ea4-a284-6f55bfe75093"). InnerVolumeSpecName "kube-api-access-25992". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.781700 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25992\" (UniqueName: \"kubernetes.io/projected/c5b6c25e-e882-4ea4-a284-6f55bfe75093-kube-api-access-25992\") on node \"crc\" DevicePath \"\""
Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.118258 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3345073b-8907-4de9-829f-73d8e79a01bb","Type":"ContainerDied","Data":"b74119743bb7cd487418f8d001a744431b3d7a1804f43dd5e7dc76b033b63247"}
Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.118322 4881 scope.go:117] "RemoveContainer" containerID="574febd8df92e0f37adc7968b35f9fcf1e5f52e202a4769da6f91161f9a9f02c"
Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.118477 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.144051 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"161c46d2-7b98-4a9e-a648-ce25b966f589","Type":"ContainerStarted","Data":"03979ebbf81c9d21976f0e3ca57a5ac30c3d37cb4b88415ec35bd982a6541479"}
Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.144117 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"161c46d2-7b98-4a9e-a648-ce25b966f589","Type":"ContainerStarted","Data":"b758db0eba45c64b878abf4b0937e61b4ada35f40c8640d44e698e03acf155c4"}
Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.145452 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.148932 4881 generic.go:334] "Generic (PLEG): container finished" podID="f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" containerID="b1e94b3b719b1a2213452fd275be74fdb796e7c03d99fa5695466085e68a91fd" exitCode=0
Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.149009 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c","Type":"ContainerDied","Data":"b1e94b3b719b1a2213452fd275be74fdb796e7c03d99fa5695466085e68a91fd"}
Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.150438 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c6ca904-2790-425f-81ac-37cdc543cf0f","Type":"ContainerDied","Data":"2c61e4a0cf50faebb3da795860373a82c98ee972146a5292709cde146a4a9c15"}
Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.150514 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.151609 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c5b6c25e-e882-4ea4-a284-6f55bfe75093","Type":"ContainerDied","Data":"a902e47db0ad78d4b1a0c530458a8cc5f24a6bbadf9cb6042572a73fad768c2d"}
Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.151673 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.187345 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.187320867 podStartE2EDuration="3.187320867s" podCreationTimestamp="2026-01-21 11:23:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:23:16.181275579 +0000 UTC m=+1583.441232068" watchObservedRunningTime="2026-01-21 11:23:16.187320867 +0000 UTC m=+1583.447277336"
Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.227862 4881 scope.go:117] "RemoveContainer" containerID="04b14eafe282879a10a549256a83522f141403e701c9d0a5d0f5ea8746de26b5"
Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.262515 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.297675 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.309086 4881 scope.go:117] "RemoveContainer" containerID="cb3c8eb696c2d6f70dd5b7efed28b2b6d15d294b8d97901355bfdcf5ce7eaa3e"
Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.317035 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.336918 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 21 11:23:16 crc kubenswrapper[4881]: E0121 11:23:16.337688 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c6ca904-2790-425f-81ac-37cdc543cf0f" containerName="nova-metadata-metadata"
Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.337716 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c6ca904-2790-425f-81ac-37cdc543cf0f" containerName="nova-metadata-metadata"
Jan 21 11:23:16 crc kubenswrapper[4881]: E0121 11:23:16.337765 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5b6c25e-e882-4ea4-a284-6f55bfe75093" containerName="kube-state-metrics"
Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.337773 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5b6c25e-e882-4ea4-a284-6f55bfe75093" containerName="kube-state-metrics"
Jan 21 11:23:16 crc kubenswrapper[4881]: E0121 11:23:16.337815 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c6ca904-2790-425f-81ac-37cdc543cf0f" containerName="nova-metadata-log"
Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.337823 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c6ca904-2790-425f-81ac-37cdc543cf0f" containerName="nova-metadata-log"
Jan 21 11:23:16 crc kubenswrapper[4881]: E0121 11:23:16.337837 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3345073b-8907-4de9-829f-73d8e79a01bb" containerName="nova-scheduler-scheduler"
Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.337844 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="3345073b-8907-4de9-829f-73d8e79a01bb" containerName="nova-scheduler-scheduler"
Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.338077 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c6ca904-2790-425f-81ac-37cdc543cf0f" containerName="nova-metadata-log"
Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.338115 4881 memory_manager.go:354] "RemoveStaleState removing
state" podUID="3345073b-8907-4de9-829f-73d8e79a01bb" containerName="nova-scheduler-scheduler" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.338134 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5b6c25e-e882-4ea4-a284-6f55bfe75093" containerName="kube-state-metrics" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.338145 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c6ca904-2790-425f-81ac-37cdc543cf0f" containerName="nova-metadata-metadata" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.339362 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.340379 4881 scope.go:117] "RemoveContainer" containerID="af06053084a285bc01330cffd9858a387580ee179dad2789e77044a776e5acf8" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.350766 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.351063 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.351925 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.370335 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.381890 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.394017 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.403241 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e33ff3f-b508-4ac4-9a60-6189a65be2a6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0e33ff3f-b508-4ac4-9a60-6189a65be2a6\") " pod="openstack/kube-state-metrics-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.403284 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf49q\" (UniqueName: \"kubernetes.io/projected/0e33ff3f-b508-4ac4-9a60-6189a65be2a6-kube-api-access-pf49q\") pod \"kube-state-metrics-0\" (UID: \"0e33ff3f-b508-4ac4-9a60-6189a65be2a6\") " pod="openstack/kube-state-metrics-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.403392 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e33ff3f-b508-4ac4-9a60-6189a65be2a6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0e33ff3f-b508-4ac4-9a60-6189a65be2a6\") " pod="openstack/kube-state-metrics-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.403481 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0e33ff3f-b508-4ac4-9a60-6189a65be2a6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0e33ff3f-b508-4ac4-9a60-6189a65be2a6\") " pod="openstack/kube-state-metrics-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.406936 4881 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.408676 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.413190 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.417217 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.422481 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.424090 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.426089 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.437889 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.463768 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.505016 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpprt\" (UniqueName: \"kubernetes.io/projected/0f1fb00c-903a-48c9-95e5-8ad34c731f41-kube-api-access-zpprt\") pod \"nova-scheduler-0\" (UID: \"0f1fb00c-903a-48c9-95e5-8ad34c731f41\") " pod="openstack/nova-scheduler-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.505979 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0e33ff3f-b508-4ac4-9a60-6189a65be2a6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0e33ff3f-b508-4ac4-9a60-6189a65be2a6\") " pod="openstack/kube-state-metrics-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.506056 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmm59\" (UniqueName: \"kubernetes.io/projected/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-kube-api-access-xmm59\") pod \"nova-metadata-0\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.506088 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f1fb00c-903a-48c9-95e5-8ad34c731f41-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0f1fb00c-903a-48c9-95e5-8ad34c731f41\") " pod="openstack/nova-scheduler-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.506191 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e33ff3f-b508-4ac4-9a60-6189a65be2a6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0e33ff3f-b508-4ac4-9a60-6189a65be2a6\") " pod="openstack/kube-state-metrics-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.506220 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf49q\" (UniqueName: 
\"kubernetes.io/projected/0e33ff3f-b508-4ac4-9a60-6189a65be2a6-kube-api-access-pf49q\") pod \"kube-state-metrics-0\" (UID: \"0e33ff3f-b508-4ac4-9a60-6189a65be2a6\") " pod="openstack/kube-state-metrics-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.506273 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f1fb00c-903a-48c9-95e5-8ad34c731f41-config-data\") pod \"nova-scheduler-0\" (UID: \"0f1fb00c-903a-48c9-95e5-8ad34c731f41\") " pod="openstack/nova-scheduler-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.506336 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.506401 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-logs\") pod \"nova-metadata-0\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.506433 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-config-data\") pod \"nova-metadata-0\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.506514 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e33ff3f-b508-4ac4-9a60-6189a65be2a6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0e33ff3f-b508-4ac4-9a60-6189a65be2a6\") " pod="openstack/kube-state-metrics-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.506580 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.543752 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e33ff3f-b508-4ac4-9a60-6189a65be2a6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0e33ff3f-b508-4ac4-9a60-6189a65be2a6\") " pod="openstack/kube-state-metrics-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.544051 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0e33ff3f-b508-4ac4-9a60-6189a65be2a6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0e33ff3f-b508-4ac4-9a60-6189a65be2a6\") " pod="openstack/kube-state-metrics-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.544417 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e33ff3f-b508-4ac4-9a60-6189a65be2a6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: 
\"0e33ff3f-b508-4ac4-9a60-6189a65be2a6\") " pod="openstack/kube-state-metrics-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.566626 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf49q\" (UniqueName: \"kubernetes.io/projected/0e33ff3f-b508-4ac4-9a60-6189a65be2a6-kube-api-access-pf49q\") pod \"kube-state-metrics-0\" (UID: \"0e33ff3f-b508-4ac4-9a60-6189a65be2a6\") " pod="openstack/kube-state-metrics-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.610932 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpprt\" (UniqueName: \"kubernetes.io/projected/0f1fb00c-903a-48c9-95e5-8ad34c731f41-kube-api-access-zpprt\") pod \"nova-scheduler-0\" (UID: \"0f1fb00c-903a-48c9-95e5-8ad34c731f41\") " pod="openstack/nova-scheduler-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.611061 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmm59\" (UniqueName: \"kubernetes.io/projected/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-kube-api-access-xmm59\") pod \"nova-metadata-0\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.611095 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f1fb00c-903a-48c9-95e5-8ad34c731f41-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0f1fb00c-903a-48c9-95e5-8ad34c731f41\") " pod="openstack/nova-scheduler-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.611201 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f1fb00c-903a-48c9-95e5-8ad34c731f41-config-data\") pod \"nova-scheduler-0\" (UID: \"0f1fb00c-903a-48c9-95e5-8ad34c731f41\") " pod="openstack/nova-scheduler-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.611265 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.611331 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-logs\") pod \"nova-metadata-0\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.611368 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-config-data\") pod \"nova-metadata-0\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.611456 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.619704 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-logs\") pod \"nova-metadata-0\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.626479 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f1fb00c-903a-48c9-95e5-8ad34c731f41-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0f1fb00c-903a-48c9-95e5-8ad34c731f41\") " pod="openstack/nova-scheduler-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.629594 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.640228 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-config-data\") pod \"nova-metadata-0\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.646135 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmm59\" (UniqueName: \"kubernetes.io/projected/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-kube-api-access-xmm59\") pod \"nova-metadata-0\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.647655 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpprt\" (UniqueName: \"kubernetes.io/projected/0f1fb00c-903a-48c9-95e5-8ad34c731f41-kube-api-access-zpprt\") pod \"nova-scheduler-0\" (UID: \"0f1fb00c-903a-48c9-95e5-8ad34c731f41\") " pod="openstack/nova-scheduler-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.654091 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f1fb00c-903a-48c9-95e5-8ad34c731f41-config-data\") pod \"nova-scheduler-0\" (UID: \"0f1fb00c-903a-48c9-95e5-8ad34c731f41\") " pod="openstack/nova-scheduler-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.665410 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.677570 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.742391 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.763660 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.329645 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3345073b-8907-4de9-829f-73d8e79a01bb" path="/var/lib/kubelet/pods/3345073b-8907-4de9-829f-73d8e79a01bb/volumes" Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.330845 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c6ca904-2790-425f-81ac-37cdc543cf0f" path="/var/lib/kubelet/pods/3c6ca904-2790-425f-81ac-37cdc543cf0f/volumes" Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.331358 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5b6c25e-e882-4ea4-a284-6f55bfe75093" path="/var/lib/kubelet/pods/c5b6c25e-e882-4ea4-a284-6f55bfe75093/volumes" Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.684497 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:23:17 crc kubenswrapper[4881]: W0121 11:23:17.722116 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e33ff3f_b508_4ac4_9a60_6189a65be2a6.slice/crio-77720f409630e323e17d0bdf3c7919468d28beac14e84407eb9a7547caf761d6 WatchSource:0}: Error finding container 77720f409630e323e17d0bdf3c7919468d28beac14e84407eb9a7547caf761d6: Status 404 returned error can't find the container with id 77720f409630e323e17d0bdf3c7919468d28beac14e84407eb9a7547caf761d6 Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.732199 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.740818 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-logs\") pod \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.740925 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-combined-ca-bundle\") pod \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.741305 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-config-data\") pod \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.741348 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lqtx\" (UniqueName: \"kubernetes.io/projected/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-kube-api-access-2lqtx\") pod \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.742142 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-logs" (OuterVolumeSpecName: "logs") pod "f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" (UID: "f39b23f8-2c7e-46d6-8e59-7980b1d2c27c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.756055 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-kube-api-access-2lqtx" (OuterVolumeSpecName: "kube-api-access-2lqtx") pod "f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" (UID: "f39b23f8-2c7e-46d6-8e59-7980b1d2c27c"). InnerVolumeSpecName "kube-api-access-2lqtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.763276 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.797329 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" (UID: "f39b23f8-2c7e-46d6-8e59-7980b1d2c27c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.825116 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-config-data" (OuterVolumeSpecName: "config-data") pod "f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" (UID: "f39b23f8-2c7e-46d6-8e59-7980b1d2c27c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.844442 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.844488 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lqtx\" (UniqueName: \"kubernetes.io/projected/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-kube-api-access-2lqtx\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.844503 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.844513 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.917401 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:23:17 crc kubenswrapper[4881]: W0121 11:23:17.918968 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f1fb00c_903a_48c9_95e5_8ad34c731f41.slice/crio-b3157e678fa44dfdf1c50a29c3af5b7c20661b982fcfdccdd420bdba43c8cf36 WatchSource:0}: Error finding container b3157e678fa44dfdf1c50a29c3af5b7c20661b982fcfdccdd420bdba43c8cf36: Status 404 returned error can't find the container with id b3157e678fa44dfdf1c50a29c3af5b7c20661b982fcfdccdd420bdba43c8cf36 Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.999045 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.999563 4881 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="ceilometer-central-agent" containerID="cri-o://8256e63406ff9c5a7c526341a649b275e3f5ab402c57f45ac53e47b1d11393f9" gracePeriod=30 Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.999643 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="proxy-httpd" containerID="cri-o://ebf63005cec886f7073127e6f8a1b1d91309382b4d83ebbd9aca189eabae9b37" gracePeriod=30 Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.999698 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="ceilometer-notification-agent" containerID="cri-o://f833baf807f57255c45be1ba58cccaca032385ccba346e4fc3846694862bc6ee" gracePeriod=30 Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.999699 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="sg-core" containerID="cri-o://19d2c0708e63a625c9564d43bfbff6b4bf382eb29c4f5fe75600d774080fe1d6" gracePeriod=30 Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.214901 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0e33ff3f-b508-4ac4-9a60-6189a65be2a6","Type":"ContainerStarted","Data":"77720f409630e323e17d0bdf3c7919468d28beac14e84407eb9a7547caf761d6"} Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.216017 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52","Type":"ContainerStarted","Data":"94be8c422811e4e8ba1078eb2e0e3d71d40e6f5e6c07d283df8a7544b7b7a114"} Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.218436 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c","Type":"ContainerDied","Data":"a5806f41aee852119e408747b6a9159dc66b4ea14896033d8861a45a5e319518"} Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.218494 4881 scope.go:117] "RemoveContainer" containerID="b1e94b3b719b1a2213452fd275be74fdb796e7c03d99fa5695466085e68a91fd" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.218494 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.226165 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0f1fb00c-903a-48c9-95e5-8ad34c731f41","Type":"ContainerStarted","Data":"b3157e678fa44dfdf1c50a29c3af5b7c20661b982fcfdccdd420bdba43c8cf36"} Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.258279 4881 scope.go:117] "RemoveContainer" containerID="71bf37a912cb19763de6a839082bf72ecae64d550a077ed5461e0d2fa0d9be80" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.284965 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.294568 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.305552 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 11:23:18 crc kubenswrapper[4881]: E0121 11:23:18.306250 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" containerName="nova-api-api" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.306273 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" containerName="nova-api-api" Jan 21 11:23:18 crc kubenswrapper[4881]: E0121 11:23:18.306286 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" containerName="nova-api-log" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.306294 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" containerName="nova-api-log" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.306507 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" containerName="nova-api-api" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.306527 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" containerName="nova-api-log" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.307832 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.313358 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.315307 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.358536 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb8d5e00-825f-4df2-9720-3de7be3e0837-logs\") pod \"nova-api-0\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " pod="openstack/nova-api-0" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.358610 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb8d5e00-825f-4df2-9720-3de7be3e0837-config-data\") pod \"nova-api-0\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " pod="openstack/nova-api-0" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.358639 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7klwj\" (UniqueName: \"kubernetes.io/projected/cb8d5e00-825f-4df2-9720-3de7be3e0837-kube-api-access-7klwj\") pod \"nova-api-0\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " pod="openstack/nova-api-0" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.358775 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb8d5e00-825f-4df2-9720-3de7be3e0837-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " pod="openstack/nova-api-0" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.460769 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7klwj\" (UniqueName: \"kubernetes.io/projected/cb8d5e00-825f-4df2-9720-3de7be3e0837-kube-api-access-7klwj\") pod \"nova-api-0\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " pod="openstack/nova-api-0" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.460931 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb8d5e00-825f-4df2-9720-3de7be3e0837-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " pod="openstack/nova-api-0" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.461021 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb8d5e00-825f-4df2-9720-3de7be3e0837-logs\") pod \"nova-api-0\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " pod="openstack/nova-api-0" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.461080 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb8d5e00-825f-4df2-9720-3de7be3e0837-config-data\") pod \"nova-api-0\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " pod="openstack/nova-api-0" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.462276 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb8d5e00-825f-4df2-9720-3de7be3e0837-logs\") pod \"nova-api-0\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " 
pod="openstack/nova-api-0" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.471697 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb8d5e00-825f-4df2-9720-3de7be3e0837-config-data\") pod \"nova-api-0\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " pod="openstack/nova-api-0" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.471973 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb8d5e00-825f-4df2-9720-3de7be3e0837-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " pod="openstack/nova-api-0" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.491475 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7klwj\" (UniqueName: \"kubernetes.io/projected/cb8d5e00-825f-4df2-9720-3de7be3e0837-kube-api-access-7klwj\") pod \"nova-api-0\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " pod="openstack/nova-api-0" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.636561 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:23:19 crc kubenswrapper[4881]: I0121 11:23:19.110515 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:23:19 crc kubenswrapper[4881]: I0121 11:23:19.648063 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" path="/var/lib/kubelet/pods/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c/volumes" Jan 21 11:23:19 crc kubenswrapper[4881]: I0121 11:23:19.654571 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cb8d5e00-825f-4df2-9720-3de7be3e0837","Type":"ContainerStarted","Data":"9b384c1c04b091d7070db9b5be692cbf3307b83743e8c28c7fc7e9002650814f"} Jan 21 11:23:20 crc kubenswrapper[4881]: I0121 11:23:20.678959 4881 generic.go:334] "Generic (PLEG): container finished" podID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerID="ebf63005cec886f7073127e6f8a1b1d91309382b4d83ebbd9aca189eabae9b37" exitCode=0 Jan 21 11:23:20 crc kubenswrapper[4881]: I0121 11:23:20.679523 4881 generic.go:334] "Generic (PLEG): container finished" podID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerID="19d2c0708e63a625c9564d43bfbff6b4bf382eb29c4f5fe75600d774080fe1d6" exitCode=2 Jan 21 11:23:20 crc kubenswrapper[4881]: I0121 11:23:20.679533 4881 generic.go:334] "Generic (PLEG): container finished" podID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerID="8256e63406ff9c5a7c526341a649b275e3f5ab402c57f45ac53e47b1d11393f9" exitCode=0 Jan 21 11:23:20 crc kubenswrapper[4881]: I0121 11:23:20.679150 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20eeb602-9c98-48ed-a9c9-22121156e8cb","Type":"ContainerDied","Data":"ebf63005cec886f7073127e6f8a1b1d91309382b4d83ebbd9aca189eabae9b37"} Jan 21 11:23:20 crc kubenswrapper[4881]: I0121 11:23:20.679598 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20eeb602-9c98-48ed-a9c9-22121156e8cb","Type":"ContainerDied","Data":"19d2c0708e63a625c9564d43bfbff6b4bf382eb29c4f5fe75600d774080fe1d6"} Jan 21 11:23:20 crc kubenswrapper[4881]: I0121 11:23:20.679612 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"20eeb602-9c98-48ed-a9c9-22121156e8cb","Type":"ContainerDied","Data":"8256e63406ff9c5a7c526341a649b275e3f5ab402c57f45ac53e47b1d11393f9"} Jan 21 11:23:20 crc kubenswrapper[4881]: I0121 11:23:20.682764 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cb8d5e00-825f-4df2-9720-3de7be3e0837","Type":"ContainerStarted","Data":"bb359efc78c8172dc142be7dbd66247c577cc9e68e31667efda8eaa45e2b6e87"} Jan 21 11:23:20 crc kubenswrapper[4881]: I0121 11:23:20.684744 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52","Type":"ContainerStarted","Data":"5317a19a5c6fd411002c22415e0ba75ced188c533ac4cf93ad9bafb7600cfba0"} Jan 21 11:23:20 crc kubenswrapper[4881]: I0121 11:23:20.686620 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0f1fb00c-903a-48c9-95e5-8ad34c731f41","Type":"ContainerStarted","Data":"e52d14e5b47ceff7047b0b43cb94e03af0a112544f5fe0cee4d41a4bd236c070"} Jan 21 11:23:20 crc kubenswrapper[4881]: I0121 11:23:20.720330 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.720307764 podStartE2EDuration="4.720307764s" podCreationTimestamp="2026-01-21 11:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:23:20.710528764 +0000 UTC m=+1587.970485233" watchObservedRunningTime="2026-01-21 11:23:20.720307764 +0000 UTC m=+1587.980264233" Jan 21 11:23:21 crc kubenswrapper[4881]: I0121 11:23:21.700750 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cb8d5e00-825f-4df2-9720-3de7be3e0837","Type":"ContainerStarted","Data":"2dfa759ad5f3629117201697e51e9070f4706b866df3273a3c40b4948e6b8705"} Jan 21 11:23:21 crc kubenswrapper[4881]: I0121 11:23:21.703838 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52","Type":"ContainerStarted","Data":"77ab3e90f4bd352be1d58beb21ac3b7c5b6ccdc4776384b4fd7529acffc8aa21"} Jan 21 11:23:21 crc kubenswrapper[4881]: I0121 11:23:21.709193 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0e33ff3f-b508-4ac4-9a60-6189a65be2a6","Type":"ContainerStarted","Data":"2c2969eba13541bcaf91a75b7beeb9e4ac3bc6b6be20cbcb1615223e9a1d0b46"} Jan 21 11:23:21 crc kubenswrapper[4881]: I0121 11:23:21.709362 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 21 11:23:21 crc kubenswrapper[4881]: I0121 11:23:21.728539 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.72851765 podStartE2EDuration="3.72851765s" podCreationTimestamp="2026-01-21 11:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:23:21.719423378 +0000 UTC m=+1588.979379847" watchObservedRunningTime="2026-01-21 11:23:21.72851765 +0000 UTC m=+1588.988474119" Jan 21 11:23:21 crc kubenswrapper[4881]: I0121 11:23:21.742224 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.150349048 podStartE2EDuration="5.742201115s" podCreationTimestamp="2026-01-21 11:23:16 +0000 UTC" 
firstStartedPulling="2026-01-21 11:23:17.727412403 +0000 UTC m=+1584.987368872" lastFinishedPulling="2026-01-21 11:23:20.31926447 +0000 UTC m=+1587.579220939" observedRunningTime="2026-01-21 11:23:21.738189327 +0000 UTC m=+1588.998145806" watchObservedRunningTime="2026-01-21 11:23:21.742201115 +0000 UTC m=+1589.002157584" Jan 21 11:23:21 crc kubenswrapper[4881]: I0121 11:23:21.743281 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 11:23:21 crc kubenswrapper[4881]: I0121 11:23:21.743387 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 11:23:21 crc kubenswrapper[4881]: I0121 11:23:21.765599 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 11:23:21 crc kubenswrapper[4881]: I0121 11:23:21.775236 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=5.775216374 podStartE2EDuration="5.775216374s" podCreationTimestamp="2026-01-21 11:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:23:21.761876277 +0000 UTC m=+1589.021832746" watchObservedRunningTime="2026-01-21 11:23:21.775216374 +0000 UTC m=+1589.035172833" Jan 21 11:23:22 crc kubenswrapper[4881]: I0121 11:23:22.618755 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-llk4v"] Jan 21 11:23:22 crc kubenswrapper[4881]: I0121 11:23:22.621397 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:22 crc kubenswrapper[4881]: I0121 11:23:22.691976 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb575609-e27b-438e-b305-754fed7dbd0c-utilities\") pod \"certified-operators-llk4v\" (UID: \"eb575609-e27b-438e-b305-754fed7dbd0c\") " pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:22 crc kubenswrapper[4881]: I0121 11:23:22.692039 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctnrj\" (UniqueName: \"kubernetes.io/projected/eb575609-e27b-438e-b305-754fed7dbd0c-kube-api-access-ctnrj\") pod \"certified-operators-llk4v\" (UID: \"eb575609-e27b-438e-b305-754fed7dbd0c\") " pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:22 crc kubenswrapper[4881]: I0121 11:23:22.692181 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb575609-e27b-438e-b305-754fed7dbd0c-catalog-content\") pod \"certified-operators-llk4v\" (UID: \"eb575609-e27b-438e-b305-754fed7dbd0c\") " pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:23 crc kubenswrapper[4881]: I0121 11:23:23.127256 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb575609-e27b-438e-b305-754fed7dbd0c-utilities\") pod \"certified-operators-llk4v\" (UID: \"eb575609-e27b-438e-b305-754fed7dbd0c\") " pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:23 crc kubenswrapper[4881]: I0121 11:23:23.127311 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctnrj\" (UniqueName: 
\"kubernetes.io/projected/eb575609-e27b-438e-b305-754fed7dbd0c-kube-api-access-ctnrj\") pod \"certified-operators-llk4v\" (UID: \"eb575609-e27b-438e-b305-754fed7dbd0c\") " pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:23 crc kubenswrapper[4881]: I0121 11:23:23.127389 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb575609-e27b-438e-b305-754fed7dbd0c-catalog-content\") pod \"certified-operators-llk4v\" (UID: \"eb575609-e27b-438e-b305-754fed7dbd0c\") " pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:23 crc kubenswrapper[4881]: I0121 11:23:23.133077 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb575609-e27b-438e-b305-754fed7dbd0c-catalog-content\") pod \"certified-operators-llk4v\" (UID: \"eb575609-e27b-438e-b305-754fed7dbd0c\") " pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:23 crc kubenswrapper[4881]: I0121 11:23:23.135436 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb575609-e27b-438e-b305-754fed7dbd0c-utilities\") pod \"certified-operators-llk4v\" (UID: \"eb575609-e27b-438e-b305-754fed7dbd0c\") " pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:23 crc kubenswrapper[4881]: I0121 11:23:23.200869 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-llk4v"] Jan 21 11:23:23 crc kubenswrapper[4881]: I0121 11:23:23.240729 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctnrj\" (UniqueName: \"kubernetes.io/projected/eb575609-e27b-438e-b305-754fed7dbd0c-kube-api-access-ctnrj\") pod \"certified-operators-llk4v\" (UID: \"eb575609-e27b-438e-b305-754fed7dbd0c\") " pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:23 crc kubenswrapper[4881]: I0121 11:23:23.464552 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:23 crc kubenswrapper[4881]: I0121 11:23:23.980011 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-llk4v"] Jan 21 11:23:24 crc kubenswrapper[4881]: I0121 11:23:24.307842 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llk4v" event={"ID":"eb575609-e27b-438e-b305-754fed7dbd0c","Type":"ContainerStarted","Data":"e500de19668bd863773799072a1748fadbbfeb7a569a7019d89d37c178966126"} Jan 21 11:23:24 crc kubenswrapper[4881]: I0121 11:23:24.354239 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.330473 4881 generic.go:334] "Generic (PLEG): container finished" podID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerID="f833baf807f57255c45be1ba58cccaca032385ccba346e4fc3846694862bc6ee" exitCode=0 Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.330858 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20eeb602-9c98-48ed-a9c9-22121156e8cb","Type":"ContainerDied","Data":"f833baf807f57255c45be1ba58cccaca032385ccba346e4fc3846694862bc6ee"} Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.332822 4881 generic.go:334] "Generic (PLEG): container finished" podID="eb575609-e27b-438e-b305-754fed7dbd0c" containerID="1728cee101905ae9b1f39e05752401a8a7ecb94af74ddb10abd60ea126aafa34" exitCode=0 Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.332872 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llk4v" event={"ID":"eb575609-e27b-438e-b305-754fed7dbd0c","Type":"ContainerDied","Data":"1728cee101905ae9b1f39e05752401a8a7ecb94af74ddb10abd60ea126aafa34"} Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.557443 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.721874 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-combined-ca-bundle\") pod \"20eeb602-9c98-48ed-a9c9-22121156e8cb\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.721961 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-sg-core-conf-yaml\") pod \"20eeb602-9c98-48ed-a9c9-22121156e8cb\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.722066 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-scripts\") pod \"20eeb602-9c98-48ed-a9c9-22121156e8cb\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.722136 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20eeb602-9c98-48ed-a9c9-22121156e8cb-run-httpd\") pod \"20eeb602-9c98-48ed-a9c9-22121156e8cb\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.722178 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20eeb602-9c98-48ed-a9c9-22121156e8cb-log-httpd\") pod \"20eeb602-9c98-48ed-a9c9-22121156e8cb\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.722199 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-config-data\") pod \"20eeb602-9c98-48ed-a9c9-22121156e8cb\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.722265 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgzxk\" (UniqueName: \"kubernetes.io/projected/20eeb602-9c98-48ed-a9c9-22121156e8cb-kube-api-access-zgzxk\") pod \"20eeb602-9c98-48ed-a9c9-22121156e8cb\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.723044 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20eeb602-9c98-48ed-a9c9-22121156e8cb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "20eeb602-9c98-48ed-a9c9-22121156e8cb" (UID: "20eeb602-9c98-48ed-a9c9-22121156e8cb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.723594 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20eeb602-9c98-48ed-a9c9-22121156e8cb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "20eeb602-9c98-48ed-a9c9-22121156e8cb" (UID: "20eeb602-9c98-48ed-a9c9-22121156e8cb"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.737826 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20eeb602-9c98-48ed-a9c9-22121156e8cb-kube-api-access-zgzxk" (OuterVolumeSpecName: "kube-api-access-zgzxk") pod "20eeb602-9c98-48ed-a9c9-22121156e8cb" (UID: "20eeb602-9c98-48ed-a9c9-22121156e8cb"). InnerVolumeSpecName "kube-api-access-zgzxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.741617 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-scripts" (OuterVolumeSpecName: "scripts") pod "20eeb602-9c98-48ed-a9c9-22121156e8cb" (UID: "20eeb602-9c98-48ed-a9c9-22121156e8cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.758124 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "20eeb602-9c98-48ed-a9c9-22121156e8cb" (UID: "20eeb602-9c98-48ed-a9c9-22121156e8cb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.808432 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20eeb602-9c98-48ed-a9c9-22121156e8cb" (UID: "20eeb602-9c98-48ed-a9c9-22121156e8cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.824476 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.824513 4881 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20eeb602-9c98-48ed-a9c9-22121156e8cb-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.824522 4881 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20eeb602-9c98-48ed-a9c9-22121156e8cb-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.824531 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgzxk\" (UniqueName: \"kubernetes.io/projected/20eeb602-9c98-48ed-a9c9-22121156e8cb-kube-api-access-zgzxk\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.824541 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.824550 4881 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.858423 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-config-data" (OuterVolumeSpecName: "config-data") pod "20eeb602-9c98-48ed-a9c9-22121156e8cb" (UID: "20eeb602-9c98-48ed-a9c9-22121156e8cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.931209 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.347264 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20eeb602-9c98-48ed-a9c9-22121156e8cb","Type":"ContainerDied","Data":"98b63a4387f707fe8989f7007a02efb416a3ce182b681d864a6fffaef05cd43d"} Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.348606 4881 scope.go:117] "RemoveContainer" containerID="ebf63005cec886f7073127e6f8a1b1d91309382b4d83ebbd9aca189eabae9b37" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.347291 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.381979 4881 scope.go:117] "RemoveContainer" containerID="19d2c0708e63a625c9564d43bfbff6b4bf382eb29c4f5fe75600d774080fe1d6" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.392901 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.412368 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.426101 4881 scope.go:117] "RemoveContainer" containerID="f833baf807f57255c45be1ba58cccaca032385ccba346e4fc3846694862bc6ee" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.428477 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:23:26 crc kubenswrapper[4881]: E0121 11:23:26.429023 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="proxy-httpd" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.429044 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="proxy-httpd" Jan 21 11:23:26 crc kubenswrapper[4881]: E0121 11:23:26.429063 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="ceilometer-central-agent" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.429071 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="ceilometer-central-agent" Jan 21 11:23:26 crc kubenswrapper[4881]: E0121 11:23:26.429089 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="sg-core" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.429096 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="sg-core" Jan 21 11:23:26 crc kubenswrapper[4881]: E0121 11:23:26.429124 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="ceilometer-notification-agent" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.429131 4881 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="ceilometer-notification-agent" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.429355 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="ceilometer-notification-agent" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.429380 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="sg-core" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.429394 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="ceilometer-central-agent" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.429404 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="proxy-httpd" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.432016 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.435494 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.435499 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.435868 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.459935 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.463432 4881 scope.go:117] "RemoveContainer" containerID="8256e63406ff9c5a7c526341a649b275e3f5ab402c57f45ac53e47b1d11393f9" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.559881 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/201fb26a-87ca-4563-a6ae-1279da9cf1d9-run-httpd\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.559960 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.560007 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.560139 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.560213 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/201fb26a-87ca-4563-a6ae-1279da9cf1d9-log-httpd\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.560252 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-config-data\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.560322 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-scripts\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.560407 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc45w\" (UniqueName: \"kubernetes.io/projected/201fb26a-87ca-4563-a6ae-1279da9cf1d9-kube-api-access-bc45w\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.662216 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.662286 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/201fb26a-87ca-4563-a6ae-1279da9cf1d9-log-httpd\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.662315 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-config-data\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.662339 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-scripts\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.662381 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc45w\" (UniqueName: \"kubernetes.io/projected/201fb26a-87ca-4563-a6ae-1279da9cf1d9-kube-api-access-bc45w\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.662428 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/201fb26a-87ca-4563-a6ae-1279da9cf1d9-run-httpd\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 
11:23:26.662466 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.662526 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.663420 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/201fb26a-87ca-4563-a6ae-1279da9cf1d9-run-httpd\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.664587 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/201fb26a-87ca-4563-a6ae-1279da9cf1d9-log-httpd\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.668576 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.669277 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-config-data\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.671397 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.672832 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-scripts\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.683349 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.687121 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc45w\" (UniqueName: \"kubernetes.io/projected/201fb26a-87ca-4563-a6ae-1279da9cf1d9-kube-api-access-bc45w\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.704801 4881 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.743810 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.743905 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.760492 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.765386 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.812625 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 11:23:27 crc kubenswrapper[4881]: I0121 11:23:27.325699 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" path="/var/lib/kubelet/pods/20eeb602-9c98-48ed-a9c9-22121156e8cb/volumes" Jan 21 11:23:27 crc kubenswrapper[4881]: I0121 11:23:27.431144 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 11:23:27 crc kubenswrapper[4881]: I0121 11:23:27.762072 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 11:23:27 crc kubenswrapper[4881]: I0121 11:23:27.762712 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 11:23:27 crc kubenswrapper[4881]: I0121 11:23:27.921478 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:23:28 crc kubenswrapper[4881]: I0121 11:23:28.407095 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"201fb26a-87ca-4563-a6ae-1279da9cf1d9","Type":"ContainerStarted","Data":"66e45f9085cd7aa6bc51a5b18dd439286f856ddcee2ed6d0f6e2f8de173537a4"} Jan 21 11:23:28 crc kubenswrapper[4881]: I0121 11:23:28.409510 4881 generic.go:334] "Generic (PLEG): container finished" podID="eb575609-e27b-438e-b305-754fed7dbd0c" containerID="e929562399ff233dd1a78f425dfd303c1e447dae54c360f17a5f7618c63f02f3" exitCode=0 Jan 21 11:23:28 crc kubenswrapper[4881]: I0121 11:23:28.409846 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llk4v" event={"ID":"eb575609-e27b-438e-b305-754fed7dbd0c","Type":"ContainerDied","Data":"e929562399ff233dd1a78f425dfd303c1e447dae54c360f17a5f7618c63f02f3"} Jan 21 11:23:28 crc kubenswrapper[4881]: I0121 11:23:28.637874 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 11:23:28 crc kubenswrapper[4881]: I0121 11:23:28.637930 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 11:23:29 crc kubenswrapper[4881]: I0121 11:23:29.423111 4881 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"201fb26a-87ca-4563-a6ae-1279da9cf1d9","Type":"ContainerStarted","Data":"21e7befe3db09a0933a930666700555026336530bd06628c4d04638027f5dd37"} Jan 21 11:23:29 crc kubenswrapper[4881]: I0121 11:23:29.721009 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cb8d5e00-825f-4df2-9720-3de7be3e0837" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.218:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 11:23:29 crc kubenswrapper[4881]: I0121 11:23:29.721003 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cb8d5e00-825f-4df2-9720-3de7be3e0837" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.218:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 11:23:29 crc kubenswrapper[4881]: I0121 11:23:29.851021 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:23:29 crc kubenswrapper[4881]: I0121 11:23:29.851109 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:23:32 crc kubenswrapper[4881]: I0121 11:23:32.248618 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vpxn7"] Jan 21 11:23:32 crc kubenswrapper[4881]: I0121 11:23:32.253399 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:32 crc kubenswrapper[4881]: I0121 11:23:32.830676 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52706c95-5c29-44cb-bc9d-2873d3a4d437-utilities\") pod \"community-operators-vpxn7\" (UID: \"52706c95-5c29-44cb-bc9d-2873d3a4d437\") " pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:32 crc kubenswrapper[4881]: I0121 11:23:32.830953 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52706c95-5c29-44cb-bc9d-2873d3a4d437-catalog-content\") pod \"community-operators-vpxn7\" (UID: \"52706c95-5c29-44cb-bc9d-2873d3a4d437\") " pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:32 crc kubenswrapper[4881]: I0121 11:23:32.830988 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkwt4\" (UniqueName: \"kubernetes.io/projected/52706c95-5c29-44cb-bc9d-2873d3a4d437-kube-api-access-gkwt4\") pod \"community-operators-vpxn7\" (UID: \"52706c95-5c29-44cb-bc9d-2873d3a4d437\") " pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:32 crc kubenswrapper[4881]: I0121 11:23:32.905666 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vpxn7"] Jan 21 11:23:32 crc kubenswrapper[4881]: I0121 11:23:32.932377 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52706c95-5c29-44cb-bc9d-2873d3a4d437-utilities\") pod \"community-operators-vpxn7\" (UID: \"52706c95-5c29-44cb-bc9d-2873d3a4d437\") " pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:32 crc kubenswrapper[4881]: I0121 11:23:32.932808 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52706c95-5c29-44cb-bc9d-2873d3a4d437-catalog-content\") pod \"community-operators-vpxn7\" (UID: \"52706c95-5c29-44cb-bc9d-2873d3a4d437\") " pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:32 crc kubenswrapper[4881]: I0121 11:23:32.932902 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkwt4\" (UniqueName: \"kubernetes.io/projected/52706c95-5c29-44cb-bc9d-2873d3a4d437-kube-api-access-gkwt4\") pod \"community-operators-vpxn7\" (UID: \"52706c95-5c29-44cb-bc9d-2873d3a4d437\") " pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:32 crc kubenswrapper[4881]: I0121 11:23:32.934019 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52706c95-5c29-44cb-bc9d-2873d3a4d437-utilities\") pod \"community-operators-vpxn7\" (UID: \"52706c95-5c29-44cb-bc9d-2873d3a4d437\") " pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:32 crc kubenswrapper[4881]: I0121 11:23:32.934636 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52706c95-5c29-44cb-bc9d-2873d3a4d437-catalog-content\") pod \"community-operators-vpxn7\" (UID: \"52706c95-5c29-44cb-bc9d-2873d3a4d437\") " pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:32 crc kubenswrapper[4881]: I0121 11:23:32.968133 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gkwt4\" (UniqueName: \"kubernetes.io/projected/52706c95-5c29-44cb-bc9d-2873d3a4d437-kube-api-access-gkwt4\") pod \"community-operators-vpxn7\" (UID: \"52706c95-5c29-44cb-bc9d-2873d3a4d437\") " pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:33 crc kubenswrapper[4881]: I0121 11:23:33.077626 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:33 crc kubenswrapper[4881]: I0121 11:23:33.222007 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llk4v" event={"ID":"eb575609-e27b-438e-b305-754fed7dbd0c","Type":"ContainerStarted","Data":"2219e8b5d6f5a7a40bd416bfddf08247dd9bb87c1adf182b223943c7ce68d925"} Jan 21 11:23:33 crc kubenswrapper[4881]: I0121 11:23:33.230251 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"201fb26a-87ca-4563-a6ae-1279da9cf1d9","Type":"ContainerStarted","Data":"a06398efdd27167761cc6251bd8384a3c3e25770859f0b77e181cd4905e9a62e"} Jan 21 11:23:33 crc kubenswrapper[4881]: I0121 11:23:33.232010 4881 generic.go:334] "Generic (PLEG): container finished" podID="50ff1a29-d6ee-4911-bb22-165aca6d8605" containerID="9d3665845c2c2c09903d0aa16a7538de5b4dcf05cef7d82865d9c9d446cdaf41" exitCode=137 Jan 21 11:23:33 crc kubenswrapper[4881]: I0121 11:23:33.232055 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"50ff1a29-d6ee-4911-bb22-165aca6d8605","Type":"ContainerDied","Data":"9d3665845c2c2c09903d0aa16a7538de5b4dcf05cef7d82865d9c9d446cdaf41"} Jan 21 11:23:33 crc kubenswrapper[4881]: I0121 11:23:33.232083 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"50ff1a29-d6ee-4911-bb22-165aca6d8605","Type":"ContainerDied","Data":"6aaf4e142828aa790e377df87440347084937144bb74fce4d8edde8de8915f28"} Jan 21 11:23:33 crc kubenswrapper[4881]: I0121 11:23:33.232106 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6aaf4e142828aa790e377df87440347084937144bb74fce4d8edde8de8915f28" Jan 21 11:23:33 crc kubenswrapper[4881]: I0121 11:23:33.281218 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-llk4v" podStartSLOduration=5.528926033 podStartE2EDuration="11.281194276s" podCreationTimestamp="2026-01-21 11:23:22 +0000 UTC" firstStartedPulling="2026-01-21 11:23:25.336073278 +0000 UTC m=+1592.596029747" lastFinishedPulling="2026-01-21 11:23:31.088341521 +0000 UTC m=+1598.348297990" observedRunningTime="2026-01-21 11:23:33.272549324 +0000 UTC m=+1600.532505793" watchObservedRunningTime="2026-01-21 11:23:33.281194276 +0000 UTC m=+1600.541150765" Jan 21 11:23:33 crc kubenswrapper[4881]: I0121 11:23:33.302710 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:33 crc kubenswrapper[4881]: I0121 11:23:33.359381 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50ff1a29-d6ee-4911-bb22-165aca6d8605-config-data\") pod \"50ff1a29-d6ee-4911-bb22-165aca6d8605\" (UID: \"50ff1a29-d6ee-4911-bb22-165aca6d8605\") " Jan 21 11:23:33 crc kubenswrapper[4881]: I0121 11:23:33.359628 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz64s\" (UniqueName: \"kubernetes.io/projected/50ff1a29-d6ee-4911-bb22-165aca6d8605-kube-api-access-xz64s\") pod \"50ff1a29-d6ee-4911-bb22-165aca6d8605\" (UID: \"50ff1a29-d6ee-4911-bb22-165aca6d8605\") " Jan 21 11:23:33 crc kubenswrapper[4881]: I0121 11:23:33.359672 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ff1a29-d6ee-4911-bb22-165aca6d8605-combined-ca-bundle\") pod \"50ff1a29-d6ee-4911-bb22-165aca6d8605\" (UID: \"50ff1a29-d6ee-4911-bb22-165aca6d8605\") " Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:33.381502 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ff1a29-d6ee-4911-bb22-165aca6d8605-kube-api-access-xz64s" (OuterVolumeSpecName: "kube-api-access-xz64s") pod "50ff1a29-d6ee-4911-bb22-165aca6d8605" (UID: "50ff1a29-d6ee-4911-bb22-165aca6d8605"). InnerVolumeSpecName "kube-api-access-xz64s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:33.448694 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ff1a29-d6ee-4911-bb22-165aca6d8605-config-data" (OuterVolumeSpecName: "config-data") pod "50ff1a29-d6ee-4911-bb22-165aca6d8605" (UID: "50ff1a29-d6ee-4911-bb22-165aca6d8605"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:33.454115 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ff1a29-d6ee-4911-bb22-165aca6d8605-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50ff1a29-d6ee-4911-bb22-165aca6d8605" (UID: "50ff1a29-d6ee-4911-bb22-165aca6d8605"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:33.463096 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xz64s\" (UniqueName: \"kubernetes.io/projected/50ff1a29-d6ee-4911-bb22-165aca6d8605-kube-api-access-xz64s\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:33.463126 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ff1a29-d6ee-4911-bb22-165aca6d8605-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:33.463136 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50ff1a29-d6ee-4911-bb22-165aca6d8605-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:33.552336 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:33.552373 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:33.757237 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vpxn7"] Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.250838 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"201fb26a-87ca-4563-a6ae-1279da9cf1d9","Type":"ContainerStarted","Data":"8c2d9a4ccbe836f11691d18a98cd55c1064fb634fa10ae39a24965732048adf6"} Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.255382 4881 generic.go:334] "Generic (PLEG): container finished" podID="52706c95-5c29-44cb-bc9d-2873d3a4d437" containerID="839be54c5d528613e443040a57965cbb40c5fa31def7b53542cfe13d609474b7" exitCode=0 Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.255542 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.255519 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpxn7" event={"ID":"52706c95-5c29-44cb-bc9d-2873d3a4d437","Type":"ContainerDied","Data":"839be54c5d528613e443040a57965cbb40c5fa31def7b53542cfe13d609474b7"} Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.255613 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpxn7" event={"ID":"52706c95-5c29-44cb-bc9d-2873d3a4d437","Type":"ContainerStarted","Data":"1de739443c6dfd6b37749b58394c2360dea5377c680c0b8dae6cbb306ba43ef6"} Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.327659 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.341660 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.362793 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 11:23:34 crc kubenswrapper[4881]: E0121 11:23:34.363425 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50ff1a29-d6ee-4911-bb22-165aca6d8605" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.363450 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="50ff1a29-d6ee-4911-bb22-165aca6d8605" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.363732 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="50ff1a29-d6ee-4911-bb22-165aca6d8605" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.364600 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.368301 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.368850 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.368886 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.380081 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.431351 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt2fc\" (UniqueName: \"kubernetes.io/projected/b9ce9000-94ef-4f6e-8bc7-97feca616b9e-kube-api-access-bt2fc\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9ce9000-94ef-4f6e-8bc7-97feca616b9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.431718 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9ce9000-94ef-4f6e-8bc7-97feca616b9e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9ce9000-94ef-4f6e-8bc7-97feca616b9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.431941 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9ce9000-94ef-4f6e-8bc7-97feca616b9e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9ce9000-94ef-4f6e-8bc7-97feca616b9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.431960 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9ce9000-94ef-4f6e-8bc7-97feca616b9e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9ce9000-94ef-4f6e-8bc7-97feca616b9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.431979 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9ce9000-94ef-4f6e-8bc7-97feca616b9e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9ce9000-94ef-4f6e-8bc7-97feca616b9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.534676 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9ce9000-94ef-4f6e-8bc7-97feca616b9e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9ce9000-94ef-4f6e-8bc7-97feca616b9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.535053 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9ce9000-94ef-4f6e-8bc7-97feca616b9e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9ce9000-94ef-4f6e-8bc7-97feca616b9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 
11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.535156 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9ce9000-94ef-4f6e-8bc7-97feca616b9e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9ce9000-94ef-4f6e-8bc7-97feca616b9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.535332 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt2fc\" (UniqueName: \"kubernetes.io/projected/b9ce9000-94ef-4f6e-8bc7-97feca616b9e-kube-api-access-bt2fc\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9ce9000-94ef-4f6e-8bc7-97feca616b9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.535539 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9ce9000-94ef-4f6e-8bc7-97feca616b9e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9ce9000-94ef-4f6e-8bc7-97feca616b9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.544064 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9ce9000-94ef-4f6e-8bc7-97feca616b9e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9ce9000-94ef-4f6e-8bc7-97feca616b9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.544467 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9ce9000-94ef-4f6e-8bc7-97feca616b9e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9ce9000-94ef-4f6e-8bc7-97feca616b9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.546561 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9ce9000-94ef-4f6e-8bc7-97feca616b9e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9ce9000-94ef-4f6e-8bc7-97feca616b9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.549420 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9ce9000-94ef-4f6e-8bc7-97feca616b9e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9ce9000-94ef-4f6e-8bc7-97feca616b9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.555628 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-llk4v" podUID="eb575609-e27b-438e-b305-754fed7dbd0c" containerName="registry-server" probeResult="failure" output=< Jan 21 11:23:34 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 11:23:34 crc kubenswrapper[4881]: > Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.556567 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt2fc\" (UniqueName: \"kubernetes.io/projected/b9ce9000-94ef-4f6e-8bc7-97feca616b9e-kube-api-access-bt2fc\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9ce9000-94ef-4f6e-8bc7-97feca616b9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.734674 4881 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:35 crc kubenswrapper[4881]: I0121 11:23:35.326762 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50ff1a29-d6ee-4911-bb22-165aca6d8605" path="/var/lib/kubelet/pods/50ff1a29-d6ee-4911-bb22-165aca6d8605/volumes" Jan 21 11:23:35 crc kubenswrapper[4881]: I0121 11:23:35.545797 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 11:23:36 crc kubenswrapper[4881]: I0121 11:23:36.289576 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"201fb26a-87ca-4563-a6ae-1279da9cf1d9","Type":"ContainerStarted","Data":"23fdf8bf079c92f8fffad95a39aeec48a0ce6ca5c3d367fd5c481ae6d0630f69"} Jan 21 11:23:36 crc kubenswrapper[4881]: I0121 11:23:36.290596 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 11:23:36 crc kubenswrapper[4881]: I0121 11:23:36.291917 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b9ce9000-94ef-4f6e-8bc7-97feca616b9e","Type":"ContainerStarted","Data":"39179a3f03cf7c0e700dc4ab827a9768bb1a1685b7d25388ec54358da8590f28"} Jan 21 11:23:36 crc kubenswrapper[4881]: I0121 11:23:36.292515 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b9ce9000-94ef-4f6e-8bc7-97feca616b9e","Type":"ContainerStarted","Data":"2a8a88246eed90b5f605d9f43551dceedbd8321c987cdcb16739add4b22765d2"} Jan 21 11:23:36 crc kubenswrapper[4881]: I0121 11:23:36.294698 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpxn7" event={"ID":"52706c95-5c29-44cb-bc9d-2873d3a4d437","Type":"ContainerStarted","Data":"fb028c7404b9ff86895c5bd0739f99516ab80f521a872c7d6e2892460b2e7b12"} Jan 21 11:23:36 crc kubenswrapper[4881]: I0121 11:23:36.324982 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.895448176 podStartE2EDuration="10.324958664s" podCreationTimestamp="2026-01-21 11:23:26 +0000 UTC" firstStartedPulling="2026-01-21 11:23:27.916357873 +0000 UTC m=+1595.176314332" lastFinishedPulling="2026-01-21 11:23:35.345868351 +0000 UTC m=+1602.605824820" observedRunningTime="2026-01-21 11:23:36.313738189 +0000 UTC m=+1603.573694658" watchObservedRunningTime="2026-01-21 11:23:36.324958664 +0000 UTC m=+1603.584915133" Jan 21 11:23:36 crc kubenswrapper[4881]: I0121 11:23:36.387623 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.387583278 podStartE2EDuration="2.387583278s" podCreationTimestamp="2026-01-21 11:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:23:36.356467386 +0000 UTC m=+1603.616423855" watchObservedRunningTime="2026-01-21 11:23:36.387583278 +0000 UTC m=+1603.647539747" Jan 21 11:23:36 crc kubenswrapper[4881]: I0121 11:23:36.751835 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 11:23:36 crc kubenswrapper[4881]: I0121 11:23:36.753838 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 11:23:36 crc kubenswrapper[4881]: I0121 11:23:36.770420 4881 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 11:23:37 crc kubenswrapper[4881]: I0121 11:23:37.308714 4881 generic.go:334] "Generic (PLEG): container finished" podID="52706c95-5c29-44cb-bc9d-2873d3a4d437" containerID="fb028c7404b9ff86895c5bd0739f99516ab80f521a872c7d6e2892460b2e7b12" exitCode=0 Jan 21 11:23:37 crc kubenswrapper[4881]: I0121 11:23:37.308772 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpxn7" event={"ID":"52706c95-5c29-44cb-bc9d-2873d3a4d437","Type":"ContainerDied","Data":"fb028c7404b9ff86895c5bd0739f99516ab80f521a872c7d6e2892460b2e7b12"} Jan 21 11:23:37 crc kubenswrapper[4881]: I0121 11:23:37.327564 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 11:23:38 crc kubenswrapper[4881]: I0121 11:23:38.657304 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 11:23:38 crc kubenswrapper[4881]: I0121 11:23:38.659024 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 11:23:38 crc kubenswrapper[4881]: I0121 11:23:38.704246 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 11:23:38 crc kubenswrapper[4881]: I0121 11:23:38.726553 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.351406 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpxn7" event={"ID":"52706c95-5c29-44cb-bc9d-2873d3a4d437","Type":"ContainerStarted","Data":"c7183c2e116e85a5f629f6e5e3ffe4538c40c34d6b8cd108a955a5b4b864a2c6"} Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.352730 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.366869 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.393590 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vpxn7" podStartSLOduration=3.904658118 podStartE2EDuration="7.39356446s" podCreationTimestamp="2026-01-21 11:23:32 +0000 UTC" firstStartedPulling="2026-01-21 11:23:34.257819939 +0000 UTC m=+1601.517776398" lastFinishedPulling="2026-01-21 11:23:37.746726271 +0000 UTC m=+1605.006682740" observedRunningTime="2026-01-21 11:23:39.371327516 +0000 UTC m=+1606.631284015" watchObservedRunningTime="2026-01-21 11:23:39.39356446 +0000 UTC m=+1606.653520939" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.631672 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d4b6b54d9-5jzpq"] Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.633602 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.667355 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d4b6b54d9-5jzpq"] Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.688380 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-dns-swift-storage-0\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.688477 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-dns-svc\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.688520 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-ovsdbserver-sb\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.688546 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwg4c\" (UniqueName: \"kubernetes.io/projected/81dbec06-59d7-4c42-a558-910811fb3811-kube-api-access-lwg4c\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.688570 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-ovsdbserver-nb\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.688599 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-config\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.735635 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.790939 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-dns-swift-storage-0\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.791097 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-dns-svc\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: 
\"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.791165 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-ovsdbserver-sb\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.791183 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwg4c\" (UniqueName: \"kubernetes.io/projected/81dbec06-59d7-4c42-a558-910811fb3811-kube-api-access-lwg4c\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.791212 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-ovsdbserver-nb\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.791244 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-config\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.792043 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-dns-swift-storage-0\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.792175 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-dns-svc\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.792192 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-ovsdbserver-nb\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.792547 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-ovsdbserver-sb\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.792574 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-config\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc 
kubenswrapper[4881]: I0121 11:23:39.824129 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwg4c\" (UniqueName: \"kubernetes.io/projected/81dbec06-59d7-4c42-a558-910811fb3811-kube-api-access-lwg4c\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.986712 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:40 crc kubenswrapper[4881]: I0121 11:23:40.946701 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d4b6b54d9-5jzpq"] Jan 21 11:23:41 crc kubenswrapper[4881]: I0121 11:23:41.633761 4881 generic.go:334] "Generic (PLEG): container finished" podID="81dbec06-59d7-4c42-a558-910811fb3811" containerID="7b3d565271b021e09dee5880082bea3cf44364df7d0a06382823cae7b26b1046" exitCode=0 Jan 21 11:23:41 crc kubenswrapper[4881]: I0121 11:23:41.633823 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" event={"ID":"81dbec06-59d7-4c42-a558-910811fb3811","Type":"ContainerDied","Data":"7b3d565271b021e09dee5880082bea3cf44364df7d0a06382823cae7b26b1046"} Jan 21 11:23:41 crc kubenswrapper[4881]: I0121 11:23:41.634379 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" event={"ID":"81dbec06-59d7-4c42-a558-910811fb3811","Type":"ContainerStarted","Data":"14e34995d6813b59d5fbddbd68a531e00edeb5c9ae370d72d56de9da156f7345"} Jan 21 11:23:42 crc kubenswrapper[4881]: I0121 11:23:42.666470 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" event={"ID":"81dbec06-59d7-4c42-a558-910811fb3811","Type":"ContainerStarted","Data":"a807273d95c9864f3ecabade018dc0a91eb28a83bcfcbef9786d9473502a12a5"} Jan 21 11:23:42 crc kubenswrapper[4881]: I0121 11:23:42.666844 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:42 crc kubenswrapper[4881]: I0121 11:23:42.699534 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" podStartSLOduration=3.69950894 podStartE2EDuration="3.69950894s" podCreationTimestamp="2026-01-21 11:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:23:42.690569692 +0000 UTC m=+1609.950526171" watchObservedRunningTime="2026-01-21 11:23:42.69950894 +0000 UTC m=+1609.959465409" Jan 21 11:23:43 crc kubenswrapper[4881]: I0121 11:23:43.069046 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:23:43 crc kubenswrapper[4881]: I0121 11:23:43.069628 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cb8d5e00-825f-4df2-9720-3de7be3e0837" containerName="nova-api-log" containerID="cri-o://bb359efc78c8172dc142be7dbd66247c577cc9e68e31667efda8eaa45e2b6e87" gracePeriod=30 Jan 21 11:23:43 crc kubenswrapper[4881]: I0121 11:23:43.069716 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cb8d5e00-825f-4df2-9720-3de7be3e0837" containerName="nova-api-api" containerID="cri-o://2dfa759ad5f3629117201697e51e9070f4706b866df3273a3c40b4948e6b8705" gracePeriod=30 Jan 21 11:23:43 crc kubenswrapper[4881]: I0121 
11:23:43.081818 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:43 crc kubenswrapper[4881]: I0121 11:23:43.083354 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:43 crc kubenswrapper[4881]: I0121 11:23:43.524136 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:43 crc kubenswrapper[4881]: I0121 11:23:43.575138 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:43 crc kubenswrapper[4881]: I0121 11:23:43.679611 4881 generic.go:334] "Generic (PLEG): container finished" podID="cb8d5e00-825f-4df2-9720-3de7be3e0837" containerID="bb359efc78c8172dc142be7dbd66247c577cc9e68e31667efda8eaa45e2b6e87" exitCode=143 Jan 21 11:23:43 crc kubenswrapper[4881]: I0121 11:23:43.679651 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cb8d5e00-825f-4df2-9720-3de7be3e0837","Type":"ContainerDied","Data":"bb359efc78c8172dc142be7dbd66247c577cc9e68e31667efda8eaa45e2b6e87"} Jan 21 11:23:44 crc kubenswrapper[4881]: I0121 11:23:44.191130 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-vpxn7" podUID="52706c95-5c29-44cb-bc9d-2873d3a4d437" containerName="registry-server" probeResult="failure" output=< Jan 21 11:23:44 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 11:23:44 crc kubenswrapper[4881]: > Jan 21 11:23:44 crc kubenswrapper[4881]: I0121 11:23:44.232601 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-llk4v"] Jan 21 11:23:44 crc kubenswrapper[4881]: I0121 11:23:44.692246 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-llk4v" podUID="eb575609-e27b-438e-b305-754fed7dbd0c" containerName="registry-server" containerID="cri-o://2219e8b5d6f5a7a40bd416bfddf08247dd9bb87c1adf182b223943c7ce68d925" gracePeriod=2 Jan 21 11:23:44 crc kubenswrapper[4881]: I0121 11:23:44.735420 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.027967 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.356273 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.516599 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctnrj\" (UniqueName: \"kubernetes.io/projected/eb575609-e27b-438e-b305-754fed7dbd0c-kube-api-access-ctnrj\") pod \"eb575609-e27b-438e-b305-754fed7dbd0c\" (UID: \"eb575609-e27b-438e-b305-754fed7dbd0c\") " Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.517312 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb575609-e27b-438e-b305-754fed7dbd0c-catalog-content\") pod \"eb575609-e27b-438e-b305-754fed7dbd0c\" (UID: \"eb575609-e27b-438e-b305-754fed7dbd0c\") " Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.517456 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb575609-e27b-438e-b305-754fed7dbd0c-utilities\") pod \"eb575609-e27b-438e-b305-754fed7dbd0c\" (UID: \"eb575609-e27b-438e-b305-754fed7dbd0c\") " Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.517938 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb575609-e27b-438e-b305-754fed7dbd0c-utilities" (OuterVolumeSpecName: "utilities") pod "eb575609-e27b-438e-b305-754fed7dbd0c" (UID: "eb575609-e27b-438e-b305-754fed7dbd0c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.518872 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb575609-e27b-438e-b305-754fed7dbd0c-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.528368 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb575609-e27b-438e-b305-754fed7dbd0c-kube-api-access-ctnrj" (OuterVolumeSpecName: "kube-api-access-ctnrj") pod "eb575609-e27b-438e-b305-754fed7dbd0c" (UID: "eb575609-e27b-438e-b305-754fed7dbd0c"). InnerVolumeSpecName "kube-api-access-ctnrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.560108 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb575609-e27b-438e-b305-754fed7dbd0c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb575609-e27b-438e-b305-754fed7dbd0c" (UID: "eb575609-e27b-438e-b305-754fed7dbd0c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.620991 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctnrj\" (UniqueName: \"kubernetes.io/projected/eb575609-e27b-438e-b305-754fed7dbd0c-kube-api-access-ctnrj\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.621030 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb575609-e27b-438e-b305-754fed7dbd0c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.705209 4881 generic.go:334] "Generic (PLEG): container finished" podID="eb575609-e27b-438e-b305-754fed7dbd0c" containerID="2219e8b5d6f5a7a40bd416bfddf08247dd9bb87c1adf182b223943c7ce68d925" exitCode=0 Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.705278 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.705295 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llk4v" event={"ID":"eb575609-e27b-438e-b305-754fed7dbd0c","Type":"ContainerDied","Data":"2219e8b5d6f5a7a40bd416bfddf08247dd9bb87c1adf182b223943c7ce68d925"} Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.706103 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llk4v" event={"ID":"eb575609-e27b-438e-b305-754fed7dbd0c","Type":"ContainerDied","Data":"e500de19668bd863773799072a1748fadbbfeb7a569a7019d89d37c178966126"} Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.706153 4881 scope.go:117] "RemoveContainer" containerID="2219e8b5d6f5a7a40bd416bfddf08247dd9bb87c1adf182b223943c7ce68d925" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.727618 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.749560 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-llk4v"] Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.750127 4881 scope.go:117] "RemoveContainer" containerID="e929562399ff233dd1a78f425dfd303c1e447dae54c360f17a5f7618c63f02f3" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.762103 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.762401 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="ceilometer-central-agent" containerID="cri-o://21e7befe3db09a0933a930666700555026336530bd06628c4d04638027f5dd37" gracePeriod=30 Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.762549 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="proxy-httpd" containerID="cri-o://23fdf8bf079c92f8fffad95a39aeec48a0ce6ca5c3d367fd5c481ae6d0630f69" gracePeriod=30 Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.762602 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="sg-core" 
containerID="cri-o://8c2d9a4ccbe836f11691d18a98cd55c1064fb634fa10ae39a24965732048adf6" gracePeriod=30 Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.762639 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="ceilometer-notification-agent" containerID="cri-o://a06398efdd27167761cc6251bd8384a3c3e25770859f0b77e181cd4905e9a62e" gracePeriod=30 Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.776017 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-llk4v"] Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.815841 4881 scope.go:117] "RemoveContainer" containerID="1728cee101905ae9b1f39e05752401a8a7ecb94af74ddb10abd60ea126aafa34" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.866249 4881 scope.go:117] "RemoveContainer" containerID="2219e8b5d6f5a7a40bd416bfddf08247dd9bb87c1adf182b223943c7ce68d925" Jan 21 11:23:45 crc kubenswrapper[4881]: E0121 11:23:45.866812 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2219e8b5d6f5a7a40bd416bfddf08247dd9bb87c1adf182b223943c7ce68d925\": container with ID starting with 2219e8b5d6f5a7a40bd416bfddf08247dd9bb87c1adf182b223943c7ce68d925 not found: ID does not exist" containerID="2219e8b5d6f5a7a40bd416bfddf08247dd9bb87c1adf182b223943c7ce68d925" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.866915 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2219e8b5d6f5a7a40bd416bfddf08247dd9bb87c1adf182b223943c7ce68d925"} err="failed to get container status \"2219e8b5d6f5a7a40bd416bfddf08247dd9bb87c1adf182b223943c7ce68d925\": rpc error: code = NotFound desc = could not find container \"2219e8b5d6f5a7a40bd416bfddf08247dd9bb87c1adf182b223943c7ce68d925\": container with ID starting with 2219e8b5d6f5a7a40bd416bfddf08247dd9bb87c1adf182b223943c7ce68d925 not found: ID does not exist" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.867016 4881 scope.go:117] "RemoveContainer" containerID="e929562399ff233dd1a78f425dfd303c1e447dae54c360f17a5f7618c63f02f3" Jan 21 11:23:45 crc kubenswrapper[4881]: E0121 11:23:45.867359 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e929562399ff233dd1a78f425dfd303c1e447dae54c360f17a5f7618c63f02f3\": container with ID starting with e929562399ff233dd1a78f425dfd303c1e447dae54c360f17a5f7618c63f02f3 not found: ID does not exist" containerID="e929562399ff233dd1a78f425dfd303c1e447dae54c360f17a5f7618c63f02f3" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.867451 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e929562399ff233dd1a78f425dfd303c1e447dae54c360f17a5f7618c63f02f3"} err="failed to get container status \"e929562399ff233dd1a78f425dfd303c1e447dae54c360f17a5f7618c63f02f3\": rpc error: code = NotFound desc = could not find container \"e929562399ff233dd1a78f425dfd303c1e447dae54c360f17a5f7618c63f02f3\": container with ID starting with e929562399ff233dd1a78f425dfd303c1e447dae54c360f17a5f7618c63f02f3 not found: ID does not exist" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.867528 4881 scope.go:117] "RemoveContainer" containerID="1728cee101905ae9b1f39e05752401a8a7ecb94af74ddb10abd60ea126aafa34" Jan 21 11:23:45 crc kubenswrapper[4881]: E0121 11:23:45.867839 4881 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1728cee101905ae9b1f39e05752401a8a7ecb94af74ddb10abd60ea126aafa34\": container with ID starting with 1728cee101905ae9b1f39e05752401a8a7ecb94af74ddb10abd60ea126aafa34 not found: ID does not exist" containerID="1728cee101905ae9b1f39e05752401a8a7ecb94af74ddb10abd60ea126aafa34" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.867942 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1728cee101905ae9b1f39e05752401a8a7ecb94af74ddb10abd60ea126aafa34"} err="failed to get container status \"1728cee101905ae9b1f39e05752401a8a7ecb94af74ddb10abd60ea126aafa34\": rpc error: code = NotFound desc = could not find container \"1728cee101905ae9b1f39e05752401a8a7ecb94af74ddb10abd60ea126aafa34\": container with ID starting with 1728cee101905ae9b1f39e05752401a8a7ecb94af74ddb10abd60ea126aafa34 not found: ID does not exist" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.999753 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-bdc49"] Jan 21 11:23:46 crc kubenswrapper[4881]: E0121 11:23:46.000323 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb575609-e27b-438e-b305-754fed7dbd0c" containerName="extract-content" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.000341 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb575609-e27b-438e-b305-754fed7dbd0c" containerName="extract-content" Jan 21 11:23:46 crc kubenswrapper[4881]: E0121 11:23:46.000369 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb575609-e27b-438e-b305-754fed7dbd0c" containerName="registry-server" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.000376 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb575609-e27b-438e-b305-754fed7dbd0c" containerName="registry-server" Jan 21 11:23:46 crc kubenswrapper[4881]: E0121 11:23:46.000386 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb575609-e27b-438e-b305-754fed7dbd0c" containerName="extract-utilities" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.000392 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb575609-e27b-438e-b305-754fed7dbd0c" containerName="extract-utilities" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.000598 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb575609-e27b-438e-b305-754fed7dbd0c" containerName="registry-server" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.001384 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.004326 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.004545 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.011839 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bdc49"] Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.148430 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bdc49\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.148491 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvcfc\" (UniqueName: \"kubernetes.io/projected/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-kube-api-access-qvcfc\") pod \"nova-cell1-cell-mapping-bdc49\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.148516 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-config-data\") pod \"nova-cell1-cell-mapping-bdc49\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.148619 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-scripts\") pod \"nova-cell1-cell-mapping-bdc49\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.250208 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bdc49\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.250491 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvcfc\" (UniqueName: \"kubernetes.io/projected/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-kube-api-access-qvcfc\") pod \"nova-cell1-cell-mapping-bdc49\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.250514 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-config-data\") pod \"nova-cell1-cell-mapping-bdc49\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.250579 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-scripts\") pod \"nova-cell1-cell-mapping-bdc49\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.257933 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-scripts\") pod \"nova-cell1-cell-mapping-bdc49\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.263894 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-config-data\") pod \"nova-cell1-cell-mapping-bdc49\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.270707 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bdc49\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.275406 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvcfc\" (UniqueName: \"kubernetes.io/projected/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-kube-api-access-qvcfc\") pod \"nova-cell1-cell-mapping-bdc49\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.340798 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.997417 4881 generic.go:334] "Generic (PLEG): container finished" podID="cb8d5e00-825f-4df2-9720-3de7be3e0837" containerID="2dfa759ad5f3629117201697e51e9070f4706b866df3273a3c40b4948e6b8705" exitCode=0 Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.997689 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cb8d5e00-825f-4df2-9720-3de7be3e0837","Type":"ContainerDied","Data":"2dfa759ad5f3629117201697e51e9070f4706b866df3273a3c40b4948e6b8705"} Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.004027 4881 generic.go:334] "Generic (PLEG): container finished" podID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerID="23fdf8bf079c92f8fffad95a39aeec48a0ce6ca5c3d367fd5c481ae6d0630f69" exitCode=0 Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.004054 4881 generic.go:334] "Generic (PLEG): container finished" podID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerID="8c2d9a4ccbe836f11691d18a98cd55c1064fb634fa10ae39a24965732048adf6" exitCode=2 Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.004062 4881 generic.go:334] "Generic (PLEG): container finished" podID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerID="21e7befe3db09a0933a930666700555026336530bd06628c4d04638027f5dd37" exitCode=0 Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.006175 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"201fb26a-87ca-4563-a6ae-1279da9cf1d9","Type":"ContainerDied","Data":"23fdf8bf079c92f8fffad95a39aeec48a0ce6ca5c3d367fd5c481ae6d0630f69"} Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.007031 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"201fb26a-87ca-4563-a6ae-1279da9cf1d9","Type":"ContainerDied","Data":"8c2d9a4ccbe836f11691d18a98cd55c1064fb634fa10ae39a24965732048adf6"} Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.007062 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"201fb26a-87ca-4563-a6ae-1279da9cf1d9","Type":"ContainerDied","Data":"21e7befe3db09a0933a930666700555026336530bd06628c4d04638027f5dd37"} Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.052733 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bdc49"] Jan 21 11:23:47 crc kubenswrapper[4881]: W0121 11:23:47.095042 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d8ffc48_6b0f_48d1_b13d_8a766f5b604a.slice/crio-319a1c0ca170ca90fa0753a5c20774856788050e89dac7393a9beb4d1a3b2bec WatchSource:0}: Error finding container 319a1c0ca170ca90fa0753a5c20774856788050e89dac7393a9beb4d1a3b2bec: Status 404 returned error can't find the container with id 319a1c0ca170ca90fa0753a5c20774856788050e89dac7393a9beb4d1a3b2bec Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.334921 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb575609-e27b-438e-b305-754fed7dbd0c" path="/var/lib/kubelet/pods/eb575609-e27b-438e-b305-754fed7dbd0c/volumes" Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.606615 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.727195 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7klwj\" (UniqueName: \"kubernetes.io/projected/cb8d5e00-825f-4df2-9720-3de7be3e0837-kube-api-access-7klwj\") pod \"cb8d5e00-825f-4df2-9720-3de7be3e0837\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.727281 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb8d5e00-825f-4df2-9720-3de7be3e0837-combined-ca-bundle\") pod \"cb8d5e00-825f-4df2-9720-3de7be3e0837\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.727372 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb8d5e00-825f-4df2-9720-3de7be3e0837-config-data\") pod \"cb8d5e00-825f-4df2-9720-3de7be3e0837\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.727415 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb8d5e00-825f-4df2-9720-3de7be3e0837-logs\") pod \"cb8d5e00-825f-4df2-9720-3de7be3e0837\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.728584 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb8d5e00-825f-4df2-9720-3de7be3e0837-logs" (OuterVolumeSpecName: "logs") pod "cb8d5e00-825f-4df2-9720-3de7be3e0837" (UID: "cb8d5e00-825f-4df2-9720-3de7be3e0837"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.735471 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb8d5e00-825f-4df2-9720-3de7be3e0837-kube-api-access-7klwj" (OuterVolumeSpecName: "kube-api-access-7klwj") pod "cb8d5e00-825f-4df2-9720-3de7be3e0837" (UID: "cb8d5e00-825f-4df2-9720-3de7be3e0837"). InnerVolumeSpecName "kube-api-access-7klwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.782358 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb8d5e00-825f-4df2-9720-3de7be3e0837-config-data" (OuterVolumeSpecName: "config-data") pod "cb8d5e00-825f-4df2-9720-3de7be3e0837" (UID: "cb8d5e00-825f-4df2-9720-3de7be3e0837"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.798551 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb8d5e00-825f-4df2-9720-3de7be3e0837-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb8d5e00-825f-4df2-9720-3de7be3e0837" (UID: "cb8d5e00-825f-4df2-9720-3de7be3e0837"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.820015 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.830859 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb8d5e00-825f-4df2-9720-3de7be3e0837-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.831219 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb8d5e00-825f-4df2-9720-3de7be3e0837-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.831234 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7klwj\" (UniqueName: \"kubernetes.io/projected/cb8d5e00-825f-4df2-9720-3de7be3e0837-kube-api-access-7klwj\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.831249 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb8d5e00-825f-4df2-9720-3de7be3e0837-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.932476 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-sg-core-conf-yaml\") pod \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.932607 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/201fb26a-87ca-4563-a6ae-1279da9cf1d9-run-httpd\") pod \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.932666 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/201fb26a-87ca-4563-a6ae-1279da9cf1d9-log-httpd\") pod \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.932703 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-config-data\") pod \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.932732 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-ceilometer-tls-certs\") pod \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.932809 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-combined-ca-bundle\") pod \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.932874 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-scripts\") pod \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\" (UID: 
\"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.932915 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc45w\" (UniqueName: \"kubernetes.io/projected/201fb26a-87ca-4563-a6ae-1279da9cf1d9-kube-api-access-bc45w\") pod \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.934913 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/201fb26a-87ca-4563-a6ae-1279da9cf1d9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "201fb26a-87ca-4563-a6ae-1279da9cf1d9" (UID: "201fb26a-87ca-4563-a6ae-1279da9cf1d9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.935624 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/201fb26a-87ca-4563-a6ae-1279da9cf1d9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "201fb26a-87ca-4563-a6ae-1279da9cf1d9" (UID: "201fb26a-87ca-4563-a6ae-1279da9cf1d9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.953435 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/201fb26a-87ca-4563-a6ae-1279da9cf1d9-kube-api-access-bc45w" (OuterVolumeSpecName: "kube-api-access-bc45w") pod "201fb26a-87ca-4563-a6ae-1279da9cf1d9" (UID: "201fb26a-87ca-4563-a6ae-1279da9cf1d9"). InnerVolumeSpecName "kube-api-access-bc45w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.953932 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-scripts" (OuterVolumeSpecName: "scripts") pod "201fb26a-87ca-4563-a6ae-1279da9cf1d9" (UID: "201fb26a-87ca-4563-a6ae-1279da9cf1d9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.041235 4881 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/201fb26a-87ca-4563-a6ae-1279da9cf1d9-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.041261 4881 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/201fb26a-87ca-4563-a6ae-1279da9cf1d9-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.041270 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.041279 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc45w\" (UniqueName: \"kubernetes.io/projected/201fb26a-87ca-4563-a6ae-1279da9cf1d9-kube-api-access-bc45w\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.043599 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bdc49" event={"ID":"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a","Type":"ContainerStarted","Data":"62b5fd9972946ab2305558cba9c0d54f5b29b725654cb25337e61434a431d9ea"} Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.043654 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bdc49" event={"ID":"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a","Type":"ContainerStarted","Data":"319a1c0ca170ca90fa0753a5c20774856788050e89dac7393a9beb4d1a3b2bec"} Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.049685 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "201fb26a-87ca-4563-a6ae-1279da9cf1d9" (UID: "201fb26a-87ca-4563-a6ae-1279da9cf1d9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.058346 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cb8d5e00-825f-4df2-9720-3de7be3e0837","Type":"ContainerDied","Data":"9b384c1c04b091d7070db9b5be692cbf3307b83743e8c28c7fc7e9002650814f"} Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.058406 4881 scope.go:117] "RemoveContainer" containerID="2dfa759ad5f3629117201697e51e9070f4706b866df3273a3c40b4948e6b8705" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.058590 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.074054 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-bdc49" podStartSLOduration=3.074029371 podStartE2EDuration="3.074029371s" podCreationTimestamp="2026-01-21 11:23:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:23:48.065251205 +0000 UTC m=+1615.325207694" watchObservedRunningTime="2026-01-21 11:23:48.074029371 +0000 UTC m=+1615.333985840" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.075565 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "201fb26a-87ca-4563-a6ae-1279da9cf1d9" (UID: "201fb26a-87ca-4563-a6ae-1279da9cf1d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.091642 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "201fb26a-87ca-4563-a6ae-1279da9cf1d9" (UID: "201fb26a-87ca-4563-a6ae-1279da9cf1d9"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.092988 4881 generic.go:334] "Generic (PLEG): container finished" podID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerID="a06398efdd27167761cc6251bd8384a3c3e25770859f0b77e181cd4905e9a62e" exitCode=0 Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.093038 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"201fb26a-87ca-4563-a6ae-1279da9cf1d9","Type":"ContainerDied","Data":"a06398efdd27167761cc6251bd8384a3c3e25770859f0b77e181cd4905e9a62e"} Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.093067 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"201fb26a-87ca-4563-a6ae-1279da9cf1d9","Type":"ContainerDied","Data":"66e45f9085cd7aa6bc51a5b18dd439286f856ddcee2ed6d0f6e2f8de173537a4"} Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.093168 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.114893 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-config-data" (OuterVolumeSpecName: "config-data") pod "201fb26a-87ca-4563-a6ae-1279da9cf1d9" (UID: "201fb26a-87ca-4563-a6ae-1279da9cf1d9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.143046 4881 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.143321 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.143454 4881 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.143563 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.191084 4881 scope.go:117] "RemoveContainer" containerID="bb359efc78c8172dc142be7dbd66247c577cc9e68e31667efda8eaa45e2b6e87" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.216764 4881 scope.go:117] "RemoveContainer" containerID="23fdf8bf079c92f8fffad95a39aeec48a0ce6ca5c3d367fd5c481ae6d0630f69" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.219389 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.240887 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.253702 4881 scope.go:117] "RemoveContainer" containerID="8c2d9a4ccbe836f11691d18a98cd55c1064fb634fa10ae39a24965732048adf6" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.267410 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 11:23:48 crc kubenswrapper[4881]: E0121 11:23:48.284349 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="ceilometer-notification-agent" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.284385 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="ceilometer-notification-agent" Jan 21 11:23:48 crc kubenswrapper[4881]: E0121 11:23:48.284403 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="proxy-httpd" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.284409 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="proxy-httpd" Jan 21 11:23:48 crc kubenswrapper[4881]: E0121 11:23:48.284426 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb8d5e00-825f-4df2-9720-3de7be3e0837" containerName="nova-api-api" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.284433 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb8d5e00-825f-4df2-9720-3de7be3e0837" containerName="nova-api-api" Jan 21 11:23:48 crc kubenswrapper[4881]: E0121 11:23:48.284464 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="sg-core" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 
11:23:48.284473 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="sg-core" Jan 21 11:23:48 crc kubenswrapper[4881]: E0121 11:23:48.284487 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb8d5e00-825f-4df2-9720-3de7be3e0837" containerName="nova-api-log" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.284494 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb8d5e00-825f-4df2-9720-3de7be3e0837" containerName="nova-api-log" Jan 21 11:23:48 crc kubenswrapper[4881]: E0121 11:23:48.284512 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="ceilometer-central-agent" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.284520 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="ceilometer-central-agent" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.284774 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="proxy-httpd" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.284810 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="ceilometer-central-agent" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.284822 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="ceilometer-notification-agent" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.284830 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb8d5e00-825f-4df2-9720-3de7be3e0837" containerName="nova-api-log" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.284840 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb8d5e00-825f-4df2-9720-3de7be3e0837" containerName="nova-api-api" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.284846 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="sg-core" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.286089 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.302982 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.303395 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.303527 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.307068 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.358116 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57mlh\" (UniqueName: \"kubernetes.io/projected/da2439be-4ed2-43a2-adbe-dd4afaa012f3-kube-api-access-57mlh\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.358180 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da2439be-4ed2-43a2-adbe-dd4afaa012f3-logs\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.358214 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.358239 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-config-data\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.358277 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-public-tls-certs\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.358320 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.391967 4881 scope.go:117] "RemoveContainer" containerID="a06398efdd27167761cc6251bd8384a3c3e25770859f0b77e181cd4905e9a62e" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.460144 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-public-tls-certs\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.460246 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.460377 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57mlh\" (UniqueName: \"kubernetes.io/projected/da2439be-4ed2-43a2-adbe-dd4afaa012f3-kube-api-access-57mlh\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.460445 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da2439be-4ed2-43a2-adbe-dd4afaa012f3-logs\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.460495 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.460527 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-config-data\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.466310 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da2439be-4ed2-43a2-adbe-dd4afaa012f3-logs\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.477171 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-config-data\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.477584 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.479266 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-public-tls-certs\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.482766 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.485950 4881 scope.go:117] "RemoveContainer" containerID="21e7befe3db09a0933a930666700555026336530bd06628c4d04638027f5dd37" 
Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.502879 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.508376 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57mlh\" (UniqueName: \"kubernetes.io/projected/da2439be-4ed2-43a2-adbe-dd4afaa012f3-kube-api-access-57mlh\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.536379 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.548846 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.551665 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.552573 4881 scope.go:117] "RemoveContainer" containerID="23fdf8bf079c92f8fffad95a39aeec48a0ce6ca5c3d367fd5c481ae6d0630f69" Jan 21 11:23:48 crc kubenswrapper[4881]: E0121 11:23:48.560995 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23fdf8bf079c92f8fffad95a39aeec48a0ce6ca5c3d367fd5c481ae6d0630f69\": container with ID starting with 23fdf8bf079c92f8fffad95a39aeec48a0ce6ca5c3d367fd5c481ae6d0630f69 not found: ID does not exist" containerID="23fdf8bf079c92f8fffad95a39aeec48a0ce6ca5c3d367fd5c481ae6d0630f69" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.561074 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23fdf8bf079c92f8fffad95a39aeec48a0ce6ca5c3d367fd5c481ae6d0630f69"} err="failed to get container status \"23fdf8bf079c92f8fffad95a39aeec48a0ce6ca5c3d367fd5c481ae6d0630f69\": rpc error: code = NotFound desc = could not find container \"23fdf8bf079c92f8fffad95a39aeec48a0ce6ca5c3d367fd5c481ae6d0630f69\": container with ID starting with 23fdf8bf079c92f8fffad95a39aeec48a0ce6ca5c3d367fd5c481ae6d0630f69 not found: ID does not exist" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.561126 4881 scope.go:117] "RemoveContainer" containerID="8c2d9a4ccbe836f11691d18a98cd55c1064fb634fa10ae39a24965732048adf6" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.561194 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.561443 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.561557 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 11:23:48 crc kubenswrapper[4881]: E0121 11:23:48.569156 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c2d9a4ccbe836f11691d18a98cd55c1064fb634fa10ae39a24965732048adf6\": container with ID starting with 8c2d9a4ccbe836f11691d18a98cd55c1064fb634fa10ae39a24965732048adf6 not found: ID does not exist" containerID="8c2d9a4ccbe836f11691d18a98cd55c1064fb634fa10ae39a24965732048adf6" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.569489 4881 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8c2d9a4ccbe836f11691d18a98cd55c1064fb634fa10ae39a24965732048adf6"} err="failed to get container status \"8c2d9a4ccbe836f11691d18a98cd55c1064fb634fa10ae39a24965732048adf6\": rpc error: code = NotFound desc = could not find container \"8c2d9a4ccbe836f11691d18a98cd55c1064fb634fa10ae39a24965732048adf6\": container with ID starting with 8c2d9a4ccbe836f11691d18a98cd55c1064fb634fa10ae39a24965732048adf6 not found: ID does not exist" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.569530 4881 scope.go:117] "RemoveContainer" containerID="a06398efdd27167761cc6251bd8384a3c3e25770859f0b77e181cd4905e9a62e" Jan 21 11:23:48 crc kubenswrapper[4881]: E0121 11:23:48.570325 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a06398efdd27167761cc6251bd8384a3c3e25770859f0b77e181cd4905e9a62e\": container with ID starting with a06398efdd27167761cc6251bd8384a3c3e25770859f0b77e181cd4905e9a62e not found: ID does not exist" containerID="a06398efdd27167761cc6251bd8384a3c3e25770859f0b77e181cd4905e9a62e" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.570369 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a06398efdd27167761cc6251bd8384a3c3e25770859f0b77e181cd4905e9a62e"} err="failed to get container status \"a06398efdd27167761cc6251bd8384a3c3e25770859f0b77e181cd4905e9a62e\": rpc error: code = NotFound desc = could not find container \"a06398efdd27167761cc6251bd8384a3c3e25770859f0b77e181cd4905e9a62e\": container with ID starting with a06398efdd27167761cc6251bd8384a3c3e25770859f0b77e181cd4905e9a62e not found: ID does not exist" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.570399 4881 scope.go:117] "RemoveContainer" containerID="21e7befe3db09a0933a930666700555026336530bd06628c4d04638027f5dd37" Jan 21 11:23:48 crc kubenswrapper[4881]: E0121 11:23:48.570746 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21e7befe3db09a0933a930666700555026336530bd06628c4d04638027f5dd37\": container with ID starting with 21e7befe3db09a0933a930666700555026336530bd06628c4d04638027f5dd37 not found: ID does not exist" containerID="21e7befe3db09a0933a930666700555026336530bd06628c4d04638027f5dd37" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.570773 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21e7befe3db09a0933a930666700555026336530bd06628c4d04638027f5dd37"} err="failed to get container status \"21e7befe3db09a0933a930666700555026336530bd06628c4d04638027f5dd37\": rpc error: code = NotFound desc = could not find container \"21e7befe3db09a0933a930666700555026336530bd06628c4d04638027f5dd37\": container with ID starting with 21e7befe3db09a0933a930666700555026336530bd06628c4d04638027f5dd37 not found: ID does not exist" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.581173 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.633156 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.671679 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5926a818-11da-4b6b-bae0-79e6d9e10728-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.671847 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5926a818-11da-4b6b-bae0-79e6d9e10728-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.671961 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5926a818-11da-4b6b-bae0-79e6d9e10728-log-httpd\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.672012 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5926a818-11da-4b6b-bae0-79e6d9e10728-config-data\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.672035 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5926a818-11da-4b6b-bae0-79e6d9e10728-scripts\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.672082 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6dvp\" (UniqueName: \"kubernetes.io/projected/5926a818-11da-4b6b-bae0-79e6d9e10728-kube-api-access-n6dvp\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.672155 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5926a818-11da-4b6b-bae0-79e6d9e10728-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.672279 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5926a818-11da-4b6b-bae0-79e6d9e10728-run-httpd\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.773933 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5926a818-11da-4b6b-bae0-79e6d9e10728-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.779076 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5926a818-11da-4b6b-bae0-79e6d9e10728-run-httpd\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.779299 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5926a818-11da-4b6b-bae0-79e6d9e10728-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.779487 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5926a818-11da-4b6b-bae0-79e6d9e10728-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.779630 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5926a818-11da-4b6b-bae0-79e6d9e10728-run-httpd\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.779799 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5926a818-11da-4b6b-bae0-79e6d9e10728-log-httpd\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.779928 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5926a818-11da-4b6b-bae0-79e6d9e10728-config-data\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.779961 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5926a818-11da-4b6b-bae0-79e6d9e10728-scripts\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.780082 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6dvp\" (UniqueName: \"kubernetes.io/projected/5926a818-11da-4b6b-bae0-79e6d9e10728-kube-api-access-n6dvp\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.784444 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5926a818-11da-4b6b-bae0-79e6d9e10728-scripts\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.784760 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5926a818-11da-4b6b-bae0-79e6d9e10728-log-httpd\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.784883 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5926a818-11da-4b6b-bae0-79e6d9e10728-config-data\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.785711 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5926a818-11da-4b6b-bae0-79e6d9e10728-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.786169 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5926a818-11da-4b6b-bae0-79e6d9e10728-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.789390 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5926a818-11da-4b6b-bae0-79e6d9e10728-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.805166 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6dvp\" (UniqueName: \"kubernetes.io/projected/5926a818-11da-4b6b-bae0-79e6d9e10728-kube-api-access-n6dvp\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.886699 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:23:49 crc kubenswrapper[4881]: I0121 11:23:49.200122 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:23:49 crc kubenswrapper[4881]: W0121 11:23:49.205672 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda2439be_4ed2_43a2_adbe_dd4afaa012f3.slice/crio-78fa7e5c3484fc7a90c022f360abd4837962f6679c1a08c1b9fdb22f193c9f13 WatchSource:0}: Error finding container 78fa7e5c3484fc7a90c022f360abd4837962f6679c1a08c1b9fdb22f193c9f13: Status 404 returned error can't find the container with id 78fa7e5c3484fc7a90c022f360abd4837962f6679c1a08c1b9fdb22f193c9f13 Jan 21 11:23:49 crc kubenswrapper[4881]: I0121 11:23:49.337149 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" path="/var/lib/kubelet/pods/201fb26a-87ca-4563-a6ae-1279da9cf1d9/volumes" Jan 21 11:23:49 crc kubenswrapper[4881]: I0121 11:23:49.339038 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb8d5e00-825f-4df2-9720-3de7be3e0837" path="/var/lib/kubelet/pods/cb8d5e00-825f-4df2-9720-3de7be3e0837/volumes" Jan 21 11:23:49 crc kubenswrapper[4881]: W0121 11:23:49.852090 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5926a818_11da_4b6b_bae0_79e6d9e10728.slice/crio-213d22bfba6e2e90a4613f8839c8270b703a1296c47a8cbf11e9134711d81ca7 WatchSource:0}: Error finding container 213d22bfba6e2e90a4613f8839c8270b703a1296c47a8cbf11e9134711d81ca7: Status 404 returned error can't find the container with id 213d22bfba6e2e90a4613f8839c8270b703a1296c47a8cbf11e9134711d81ca7 Jan 21 11:23:49 crc kubenswrapper[4881]: I0121 
11:23:49.852845 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:23:49 crc kubenswrapper[4881]: I0121 11:23:49.987685 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.115213 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9f55bccdc-ghvhg"] Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.115508 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" podUID="859758f9-0dc2-4397-a75a-b098eaabe613" containerName="dnsmasq-dns" containerID="cri-o://ddbf5564e0fec706a2bc3be62fec290ad0d5c0dccb7ad63e5048139ac59265e0" gracePeriod=10 Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.150092 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5926a818-11da-4b6b-bae0-79e6d9e10728","Type":"ContainerStarted","Data":"213d22bfba6e2e90a4613f8839c8270b703a1296c47a8cbf11e9134711d81ca7"} Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.159177 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"da2439be-4ed2-43a2-adbe-dd4afaa012f3","Type":"ContainerStarted","Data":"80e209a06fa6ebe24f14a7a5f19b6ec4b9439abda270d225d2c57b6f4688cd25"} Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.159240 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"da2439be-4ed2-43a2-adbe-dd4afaa012f3","Type":"ContainerStarted","Data":"78fa7e5c3484fc7a90c022f360abd4837962f6679c1a08c1b9fdb22f193c9f13"} Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.758816 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.808974 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-config\") pod \"859758f9-0dc2-4397-a75a-b098eaabe613\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.809933 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-ovsdbserver-sb\") pod \"859758f9-0dc2-4397-a75a-b098eaabe613\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.810117 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-dns-swift-storage-0\") pod \"859758f9-0dc2-4397-a75a-b098eaabe613\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.810534 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-dns-svc\") pod \"859758f9-0dc2-4397-a75a-b098eaabe613\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.810949 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prhq6\" (UniqueName: \"kubernetes.io/projected/859758f9-0dc2-4397-a75a-b098eaabe613-kube-api-access-prhq6\") pod \"859758f9-0dc2-4397-a75a-b098eaabe613\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.811298 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-ovsdbserver-nb\") pod \"859758f9-0dc2-4397-a75a-b098eaabe613\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.835153 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/859758f9-0dc2-4397-a75a-b098eaabe613-kube-api-access-prhq6" (OuterVolumeSpecName: "kube-api-access-prhq6") pod "859758f9-0dc2-4397-a75a-b098eaabe613" (UID: "859758f9-0dc2-4397-a75a-b098eaabe613"). InnerVolumeSpecName "kube-api-access-prhq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.901436 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "859758f9-0dc2-4397-a75a-b098eaabe613" (UID: "859758f9-0dc2-4397-a75a-b098eaabe613"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.915096 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prhq6\" (UniqueName: \"kubernetes.io/projected/859758f9-0dc2-4397-a75a-b098eaabe613-kube-api-access-prhq6\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.915128 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.915940 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-config" (OuterVolumeSpecName: "config") pod "859758f9-0dc2-4397-a75a-b098eaabe613" (UID: "859758f9-0dc2-4397-a75a-b098eaabe613"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.937828 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "859758f9-0dc2-4397-a75a-b098eaabe613" (UID: "859758f9-0dc2-4397-a75a-b098eaabe613"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.949287 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "859758f9-0dc2-4397-a75a-b098eaabe613" (UID: "859758f9-0dc2-4397-a75a-b098eaabe613"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.956360 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "859758f9-0dc2-4397-a75a-b098eaabe613" (UID: "859758f9-0dc2-4397-a75a-b098eaabe613"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.017017 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.017462 4881 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.017532 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.017588 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.171721 4881 generic.go:334] "Generic (PLEG): container finished" podID="859758f9-0dc2-4397-a75a-b098eaabe613" containerID="ddbf5564e0fec706a2bc3be62fec290ad0d5c0dccb7ad63e5048139ac59265e0" exitCode=0 Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.171803 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.171840 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" event={"ID":"859758f9-0dc2-4397-a75a-b098eaabe613","Type":"ContainerDied","Data":"ddbf5564e0fec706a2bc3be62fec290ad0d5c0dccb7ad63e5048139ac59265e0"} Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.171935 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" event={"ID":"859758f9-0dc2-4397-a75a-b098eaabe613","Type":"ContainerDied","Data":"f75b793fa7a8fa638c746656a34aafcf67f449119cc5beb64d5b0d6054ef7320"} Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.171969 4881 scope.go:117] "RemoveContainer" containerID="ddbf5564e0fec706a2bc3be62fec290ad0d5c0dccb7ad63e5048139ac59265e0" Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.180113 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5926a818-11da-4b6b-bae0-79e6d9e10728","Type":"ContainerStarted","Data":"bc4b878932d74665a9e8184d3f6d1985e6b6477d872a1d17b86a4fcb8439604e"} Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.180770 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5926a818-11da-4b6b-bae0-79e6d9e10728","Type":"ContainerStarted","Data":"2e7b045b897dc331a89c4051f48a735168a1a248aad4092aef521f1e6ac87e3c"} Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.184321 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"da2439be-4ed2-43a2-adbe-dd4afaa012f3","Type":"ContainerStarted","Data":"5f6a607787d7e9e1eada9a9f91e574513eb5ba0e4548b904cb79b64f1f85f516"} Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.198614 4881 scope.go:117] "RemoveContainer" containerID="14c1d2dd7151297f216e34923a28ce4dc55ea08298e597088fec945419be539f" Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.217664 4881 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.217629894 podStartE2EDuration="3.217629894s" podCreationTimestamp="2026-01-21 11:23:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:23:51.211233427 +0000 UTC m=+1618.471189916" watchObservedRunningTime="2026-01-21 11:23:51.217629894 +0000 UTC m=+1618.477586363" Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.237474 4881 scope.go:117] "RemoveContainer" containerID="ddbf5564e0fec706a2bc3be62fec290ad0d5c0dccb7ad63e5048139ac59265e0" Jan 21 11:23:51 crc kubenswrapper[4881]: E0121 11:23:51.240942 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddbf5564e0fec706a2bc3be62fec290ad0d5c0dccb7ad63e5048139ac59265e0\": container with ID starting with ddbf5564e0fec706a2bc3be62fec290ad0d5c0dccb7ad63e5048139ac59265e0 not found: ID does not exist" containerID="ddbf5564e0fec706a2bc3be62fec290ad0d5c0dccb7ad63e5048139ac59265e0" Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.241145 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddbf5564e0fec706a2bc3be62fec290ad0d5c0dccb7ad63e5048139ac59265e0"} err="failed to get container status \"ddbf5564e0fec706a2bc3be62fec290ad0d5c0dccb7ad63e5048139ac59265e0\": rpc error: code = NotFound desc = could not find container \"ddbf5564e0fec706a2bc3be62fec290ad0d5c0dccb7ad63e5048139ac59265e0\": container with ID starting with ddbf5564e0fec706a2bc3be62fec290ad0d5c0dccb7ad63e5048139ac59265e0 not found: ID does not exist" Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.241277 4881 scope.go:117] "RemoveContainer" containerID="14c1d2dd7151297f216e34923a28ce4dc55ea08298e597088fec945419be539f" Jan 21 11:23:51 crc kubenswrapper[4881]: E0121 11:23:51.241664 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14c1d2dd7151297f216e34923a28ce4dc55ea08298e597088fec945419be539f\": container with ID starting with 14c1d2dd7151297f216e34923a28ce4dc55ea08298e597088fec945419be539f not found: ID does not exist" containerID="14c1d2dd7151297f216e34923a28ce4dc55ea08298e597088fec945419be539f" Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.241693 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14c1d2dd7151297f216e34923a28ce4dc55ea08298e597088fec945419be539f"} err="failed to get container status \"14c1d2dd7151297f216e34923a28ce4dc55ea08298e597088fec945419be539f\": rpc error: code = NotFound desc = could not find container \"14c1d2dd7151297f216e34923a28ce4dc55ea08298e597088fec945419be539f\": container with ID starting with 14c1d2dd7151297f216e34923a28ce4dc55ea08298e597088fec945419be539f not found: ID does not exist" Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.277066 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9f55bccdc-ghvhg"] Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.289670 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9f55bccdc-ghvhg"] Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.352328 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="859758f9-0dc2-4397-a75a-b098eaabe613" path="/var/lib/kubelet/pods/859758f9-0dc2-4397-a75a-b098eaabe613/volumes" Jan 21 11:23:52 crc kubenswrapper[4881]: I0121 
11:23:52.400862 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5926a818-11da-4b6b-bae0-79e6d9e10728","Type":"ContainerStarted","Data":"e94b267bf0b2818197fb779d251f358e2b25747ccdf47395bec37b9e7404205b"} Jan 21 11:23:53 crc kubenswrapper[4881]: I0121 11:23:53.170351 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:53 crc kubenswrapper[4881]: I0121 11:23:53.246559 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:53 crc kubenswrapper[4881]: I0121 11:23:53.413109 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5926a818-11da-4b6b-bae0-79e6d9e10728","Type":"ContainerStarted","Data":"f81308efcf994beb460b7755557a1bb954ff571ad24313dfef76a4e4edac553f"} Jan 21 11:23:53 crc kubenswrapper[4881]: I0121 11:23:53.444768 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.3433578 podStartE2EDuration="5.444742199s" podCreationTimestamp="2026-01-21 11:23:48 +0000 UTC" firstStartedPulling="2026-01-21 11:23:49.875114309 +0000 UTC m=+1617.135070768" lastFinishedPulling="2026-01-21 11:23:52.976498698 +0000 UTC m=+1620.236455167" observedRunningTime="2026-01-21 11:23:53.434583889 +0000 UTC m=+1620.694540378" watchObservedRunningTime="2026-01-21 11:23:53.444742199 +0000 UTC m=+1620.704698668" Jan 21 11:23:53 crc kubenswrapper[4881]: I0121 11:23:53.819145 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vpxn7"] Jan 21 11:23:54 crc kubenswrapper[4881]: I0121 11:23:54.422646 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vpxn7" podUID="52706c95-5c29-44cb-bc9d-2873d3a4d437" containerName="registry-server" containerID="cri-o://c7183c2e116e85a5f629f6e5e3ffe4538c40c34d6b8cd108a955a5b4b864a2c6" gracePeriod=2 Jan 21 11:23:54 crc kubenswrapper[4881]: I0121 11:23:54.423060 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.489446 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.498937 4881 generic.go:334] "Generic (PLEG): container finished" podID="52706c95-5c29-44cb-bc9d-2873d3a4d437" containerID="c7183c2e116e85a5f629f6e5e3ffe4538c40c34d6b8cd108a955a5b4b864a2c6" exitCode=0 Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.500296 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpxn7" event={"ID":"52706c95-5c29-44cb-bc9d-2873d3a4d437","Type":"ContainerDied","Data":"c7183c2e116e85a5f629f6e5e3ffe4538c40c34d6b8cd108a955a5b4b864a2c6"} Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.500337 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpxn7" event={"ID":"52706c95-5c29-44cb-bc9d-2873d3a4d437","Type":"ContainerDied","Data":"1de739443c6dfd6b37749b58394c2360dea5377c680c0b8dae6cbb306ba43ef6"} Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.500362 4881 scope.go:117] "RemoveContainer" containerID="c7183c2e116e85a5f629f6e5e3ffe4538c40c34d6b8cd108a955a5b4b864a2c6" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.539985 4881 scope.go:117] "RemoveContainer" containerID="fb028c7404b9ff86895c5bd0739f99516ab80f521a872c7d6e2892460b2e7b12" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.584807 4881 scope.go:117] "RemoveContainer" containerID="839be54c5d528613e443040a57965cbb40c5fa31def7b53542cfe13d609474b7" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.593888 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52706c95-5c29-44cb-bc9d-2873d3a4d437-utilities\") pod \"52706c95-5c29-44cb-bc9d-2873d3a4d437\" (UID: \"52706c95-5c29-44cb-bc9d-2873d3a4d437\") " Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.594059 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52706c95-5c29-44cb-bc9d-2873d3a4d437-catalog-content\") pod \"52706c95-5c29-44cb-bc9d-2873d3a4d437\" (UID: \"52706c95-5c29-44cb-bc9d-2873d3a4d437\") " Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.594123 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkwt4\" (UniqueName: \"kubernetes.io/projected/52706c95-5c29-44cb-bc9d-2873d3a4d437-kube-api-access-gkwt4\") pod \"52706c95-5c29-44cb-bc9d-2873d3a4d437\" (UID: \"52706c95-5c29-44cb-bc9d-2873d3a4d437\") " Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.595159 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52706c95-5c29-44cb-bc9d-2873d3a4d437-utilities" (OuterVolumeSpecName: "utilities") pod "52706c95-5c29-44cb-bc9d-2873d3a4d437" (UID: "52706c95-5c29-44cb-bc9d-2873d3a4d437"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.624113 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52706c95-5c29-44cb-bc9d-2873d3a4d437-kube-api-access-gkwt4" (OuterVolumeSpecName: "kube-api-access-gkwt4") pod "52706c95-5c29-44cb-bc9d-2873d3a4d437" (UID: "52706c95-5c29-44cb-bc9d-2873d3a4d437"). InnerVolumeSpecName "kube-api-access-gkwt4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.629954 4881 scope.go:117] "RemoveContainer" containerID="c7183c2e116e85a5f629f6e5e3ffe4538c40c34d6b8cd108a955a5b4b864a2c6" Jan 21 11:23:55 crc kubenswrapper[4881]: E0121 11:23:55.631412 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7183c2e116e85a5f629f6e5e3ffe4538c40c34d6b8cd108a955a5b4b864a2c6\": container with ID starting with c7183c2e116e85a5f629f6e5e3ffe4538c40c34d6b8cd108a955a5b4b864a2c6 not found: ID does not exist" containerID="c7183c2e116e85a5f629f6e5e3ffe4538c40c34d6b8cd108a955a5b4b864a2c6" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.631455 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7183c2e116e85a5f629f6e5e3ffe4538c40c34d6b8cd108a955a5b4b864a2c6"} err="failed to get container status \"c7183c2e116e85a5f629f6e5e3ffe4538c40c34d6b8cd108a955a5b4b864a2c6\": rpc error: code = NotFound desc = could not find container \"c7183c2e116e85a5f629f6e5e3ffe4538c40c34d6b8cd108a955a5b4b864a2c6\": container with ID starting with c7183c2e116e85a5f629f6e5e3ffe4538c40c34d6b8cd108a955a5b4b864a2c6 not found: ID does not exist" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.631481 4881 scope.go:117] "RemoveContainer" containerID="fb028c7404b9ff86895c5bd0739f99516ab80f521a872c7d6e2892460b2e7b12" Jan 21 11:23:55 crc kubenswrapper[4881]: E0121 11:23:55.632984 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb028c7404b9ff86895c5bd0739f99516ab80f521a872c7d6e2892460b2e7b12\": container with ID starting with fb028c7404b9ff86895c5bd0739f99516ab80f521a872c7d6e2892460b2e7b12 not found: ID does not exist" containerID="fb028c7404b9ff86895c5bd0739f99516ab80f521a872c7d6e2892460b2e7b12" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.633009 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb028c7404b9ff86895c5bd0739f99516ab80f521a872c7d6e2892460b2e7b12"} err="failed to get container status \"fb028c7404b9ff86895c5bd0739f99516ab80f521a872c7d6e2892460b2e7b12\": rpc error: code = NotFound desc = could not find container \"fb028c7404b9ff86895c5bd0739f99516ab80f521a872c7d6e2892460b2e7b12\": container with ID starting with fb028c7404b9ff86895c5bd0739f99516ab80f521a872c7d6e2892460b2e7b12 not found: ID does not exist" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.633024 4881 scope.go:117] "RemoveContainer" containerID="839be54c5d528613e443040a57965cbb40c5fa31def7b53542cfe13d609474b7" Jan 21 11:23:55 crc kubenswrapper[4881]: E0121 11:23:55.636946 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"839be54c5d528613e443040a57965cbb40c5fa31def7b53542cfe13d609474b7\": container with ID starting with 839be54c5d528613e443040a57965cbb40c5fa31def7b53542cfe13d609474b7 not found: ID does not exist" containerID="839be54c5d528613e443040a57965cbb40c5fa31def7b53542cfe13d609474b7" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.636999 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"839be54c5d528613e443040a57965cbb40c5fa31def7b53542cfe13d609474b7"} err="failed to get container status \"839be54c5d528613e443040a57965cbb40c5fa31def7b53542cfe13d609474b7\": rpc error: code = NotFound desc = could not 
find container \"839be54c5d528613e443040a57965cbb40c5fa31def7b53542cfe13d609474b7\": container with ID starting with 839be54c5d528613e443040a57965cbb40c5fa31def7b53542cfe13d609474b7 not found: ID does not exist" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.652269 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52706c95-5c29-44cb-bc9d-2873d3a4d437-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52706c95-5c29-44cb-bc9d-2873d3a4d437" (UID: "52706c95-5c29-44cb-bc9d-2873d3a4d437"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.697621 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52706c95-5c29-44cb-bc9d-2873d3a4d437-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.697695 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkwt4\" (UniqueName: \"kubernetes.io/projected/52706c95-5c29-44cb-bc9d-2873d3a4d437-kube-api-access-gkwt4\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.697719 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52706c95-5c29-44cb-bc9d-2873d3a4d437-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:56 crc kubenswrapper[4881]: I0121 11:23:56.510457 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:56 crc kubenswrapper[4881]: I0121 11:23:56.513051 4881 generic.go:334] "Generic (PLEG): container finished" podID="3d8ffc48-6b0f-48d1-b13d-8a766f5b604a" containerID="62b5fd9972946ab2305558cba9c0d54f5b29b725654cb25337e61434a431d9ea" exitCode=0 Jan 21 11:23:56 crc kubenswrapper[4881]: I0121 11:23:56.513094 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bdc49" event={"ID":"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a","Type":"ContainerDied","Data":"62b5fd9972946ab2305558cba9c0d54f5b29b725654cb25337e61434a431d9ea"} Jan 21 11:23:56 crc kubenswrapper[4881]: I0121 11:23:56.553155 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vpxn7"] Jan 21 11:23:56 crc kubenswrapper[4881]: I0121 11:23:56.561896 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vpxn7"] Jan 21 11:23:57 crc kubenswrapper[4881]: I0121 11:23:57.342926 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52706c95-5c29-44cb-bc9d-2873d3a4d437" path="/var/lib/kubelet/pods/52706c95-5c29-44cb-bc9d-2873d3a4d437/volumes" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.095056 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.162292 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-scripts\") pod \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.162553 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-combined-ca-bundle\") pod \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.162649 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvcfc\" (UniqueName: \"kubernetes.io/projected/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-kube-api-access-qvcfc\") pod \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.162850 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-config-data\") pod \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.174137 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-kube-api-access-qvcfc" (OuterVolumeSpecName: "kube-api-access-qvcfc") pod "3d8ffc48-6b0f-48d1-b13d-8a766f5b604a" (UID: "3d8ffc48-6b0f-48d1-b13d-8a766f5b604a"). InnerVolumeSpecName "kube-api-access-qvcfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.182372 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-scripts" (OuterVolumeSpecName: "scripts") pod "3d8ffc48-6b0f-48d1-b13d-8a766f5b604a" (UID: "3d8ffc48-6b0f-48d1-b13d-8a766f5b604a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.199027 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-config-data" (OuterVolumeSpecName: "config-data") pod "3d8ffc48-6b0f-48d1-b13d-8a766f5b604a" (UID: "3d8ffc48-6b0f-48d1-b13d-8a766f5b604a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.203943 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d8ffc48-6b0f-48d1-b13d-8a766f5b604a" (UID: "3d8ffc48-6b0f-48d1-b13d-8a766f5b604a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.264400 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.264448 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.264462 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.264476 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvcfc\" (UniqueName: \"kubernetes.io/projected/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-kube-api-access-qvcfc\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.634774 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.634861 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.682909 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bdc49" event={"ID":"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a","Type":"ContainerDied","Data":"319a1c0ca170ca90fa0753a5c20774856788050e89dac7393a9beb4d1a3b2bec"} Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.682956 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="319a1c0ca170ca90fa0753a5c20774856788050e89dac7393a9beb4d1a3b2bec" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.683023 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.847414 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.847677 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="da2439be-4ed2-43a2-adbe-dd4afaa012f3" containerName="nova-api-log" containerID="cri-o://80e209a06fa6ebe24f14a7a5f19b6ec4b9439abda270d225d2c57b6f4688cd25" gracePeriod=30 Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.847810 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="da2439be-4ed2-43a2-adbe-dd4afaa012f3" containerName="nova-api-api" containerID="cri-o://5f6a607787d7e9e1eada9a9f91e574513eb5ba0e4548b904cb79b64f1f85f516" gracePeriod=30 Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.853169 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="da2439be-4ed2-43a2-adbe-dd4afaa012f3" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": EOF" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.853243 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="da2439be-4ed2-43a2-adbe-dd4afaa012f3" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": EOF" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.870878 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.871114 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0f1fb00c-903a-48c9-95e5-8ad34c731f41" containerName="nova-scheduler-scheduler" containerID="cri-o://e52d14e5b47ceff7047b0b43cb94e03af0a112544f5fe0cee4d41a4bd236c070" gracePeriod=30 Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.939638 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.940032 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" containerName="nova-metadata-log" containerID="cri-o://5317a19a5c6fd411002c22415e0ba75ced188c533ac4cf93ad9bafb7600cfba0" gracePeriod=30 Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.940045 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" containerName="nova-metadata-metadata" containerID="cri-o://77ab3e90f4bd352be1d58beb21ac3b7c5b6ccdc4776384b4fd7529acffc8aa21" gracePeriod=30 Jan 21 11:23:59 crc kubenswrapper[4881]: E0121 11:23:59.275365 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e3b0813_d7bc_4e2e_aa18_fe1e00c75f52.slice/crio-conmon-5317a19a5c6fd411002c22415e0ba75ced188c533ac4cf93ad9bafb7600cfba0.scope\": RecentStats: unable to find data in memory cache]" Jan 21 11:23:59 crc kubenswrapper[4881]: I0121 11:23:59.694440 4881 generic.go:334] "Generic (PLEG): container finished" podID="da2439be-4ed2-43a2-adbe-dd4afaa012f3" containerID="80e209a06fa6ebe24f14a7a5f19b6ec4b9439abda270d225d2c57b6f4688cd25" exitCode=143 Jan 21 11:23:59 crc 
kubenswrapper[4881]: I0121 11:23:59.694538 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"da2439be-4ed2-43a2-adbe-dd4afaa012f3","Type":"ContainerDied","Data":"80e209a06fa6ebe24f14a7a5f19b6ec4b9439abda270d225d2c57b6f4688cd25"} Jan 21 11:23:59 crc kubenswrapper[4881]: I0121 11:23:59.697699 4881 generic.go:334] "Generic (PLEG): container finished" podID="7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" containerID="5317a19a5c6fd411002c22415e0ba75ced188c533ac4cf93ad9bafb7600cfba0" exitCode=143 Jan 21 11:23:59 crc kubenswrapper[4881]: I0121 11:23:59.697750 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52","Type":"ContainerDied","Data":"5317a19a5c6fd411002c22415e0ba75ced188c533ac4cf93ad9bafb7600cfba0"} Jan 21 11:23:59 crc kubenswrapper[4881]: I0121 11:23:59.851011 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:23:59 crc kubenswrapper[4881]: I0121 11:23:59.851070 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.544600 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.693215 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.709160 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-nova-metadata-tls-certs\") pod \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.709272 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-combined-ca-bundle\") pod \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.709351 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-config-data\") pod \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.709495 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-logs\") pod \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.709895 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmm59\" (UniqueName: \"kubernetes.io/projected/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-kube-api-access-xmm59\") pod \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.712015 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-logs" (OuterVolumeSpecName: "logs") pod "7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" (UID: "7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.719288 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-kube-api-access-xmm59" (OuterVolumeSpecName: "kube-api-access-xmm59") pod "7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" (UID: "7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52"). InnerVolumeSpecName "kube-api-access-xmm59". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.720226 4881 generic.go:334] "Generic (PLEG): container finished" podID="0f1fb00c-903a-48c9-95e5-8ad34c731f41" containerID="e52d14e5b47ceff7047b0b43cb94e03af0a112544f5fe0cee4d41a4bd236c070" exitCode=0 Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.720329 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0f1fb00c-903a-48c9-95e5-8ad34c731f41","Type":"ContainerDied","Data":"e52d14e5b47ceff7047b0b43cb94e03af0a112544f5fe0cee4d41a4bd236c070"} Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.720366 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0f1fb00c-903a-48c9-95e5-8ad34c731f41","Type":"ContainerDied","Data":"b3157e678fa44dfdf1c50a29c3af5b7c20661b982fcfdccdd420bdba43c8cf36"} Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.720389 4881 scope.go:117] "RemoveContainer" containerID="e52d14e5b47ceff7047b0b43cb94e03af0a112544f5fe0cee4d41a4bd236c070" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.720423 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.732709 4881 generic.go:334] "Generic (PLEG): container finished" podID="7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" containerID="77ab3e90f4bd352be1d58beb21ac3b7c5b6ccdc4776384b4fd7529acffc8aa21" exitCode=0 Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.732759 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52","Type":"ContainerDied","Data":"77ab3e90f4bd352be1d58beb21ac3b7c5b6ccdc4776384b4fd7529acffc8aa21"} Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.732812 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52","Type":"ContainerDied","Data":"94be8c422811e4e8ba1078eb2e0e3d71d40e6f5e6c07d283df8a7544b7b7a114"} Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.732881 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.785970 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-config-data" (OuterVolumeSpecName: "config-data") pod "7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" (UID: "7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.787243 4881 scope.go:117] "RemoveContainer" containerID="e52d14e5b47ceff7047b0b43cb94e03af0a112544f5fe0cee4d41a4bd236c070" Jan 21 11:24:01 crc kubenswrapper[4881]: E0121 11:24:01.788271 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e52d14e5b47ceff7047b0b43cb94e03af0a112544f5fe0cee4d41a4bd236c070\": container with ID starting with e52d14e5b47ceff7047b0b43cb94e03af0a112544f5fe0cee4d41a4bd236c070 not found: ID does not exist" containerID="e52d14e5b47ceff7047b0b43cb94e03af0a112544f5fe0cee4d41a4bd236c070" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.788335 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e52d14e5b47ceff7047b0b43cb94e03af0a112544f5fe0cee4d41a4bd236c070"} err="failed to get container status \"e52d14e5b47ceff7047b0b43cb94e03af0a112544f5fe0cee4d41a4bd236c070\": rpc error: code = NotFound desc = could not find container \"e52d14e5b47ceff7047b0b43cb94e03af0a112544f5fe0cee4d41a4bd236c070\": container with ID starting with e52d14e5b47ceff7047b0b43cb94e03af0a112544f5fe0cee4d41a4bd236c070 not found: ID does not exist" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.788358 4881 scope.go:117] "RemoveContainer" containerID="77ab3e90f4bd352be1d58beb21ac3b7c5b6ccdc4776384b4fd7529acffc8aa21" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.792986 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" (UID: "7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.812476 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f1fb00c-903a-48c9-95e5-8ad34c731f41-combined-ca-bundle\") pod \"0f1fb00c-903a-48c9-95e5-8ad34c731f41\" (UID: \"0f1fb00c-903a-48c9-95e5-8ad34c731f41\") " Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.812607 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpprt\" (UniqueName: \"kubernetes.io/projected/0f1fb00c-903a-48c9-95e5-8ad34c731f41-kube-api-access-zpprt\") pod \"0f1fb00c-903a-48c9-95e5-8ad34c731f41\" (UID: \"0f1fb00c-903a-48c9-95e5-8ad34c731f41\") " Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.812734 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f1fb00c-903a-48c9-95e5-8ad34c731f41-config-data\") pod \"0f1fb00c-903a-48c9-95e5-8ad34c731f41\" (UID: \"0f1fb00c-903a-48c9-95e5-8ad34c731f41\") " Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.813378 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.813413 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.813430 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.813527 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmm59\" (UniqueName: \"kubernetes.io/projected/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-kube-api-access-xmm59\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.818008 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f1fb00c-903a-48c9-95e5-8ad34c731f41-kube-api-access-zpprt" (OuterVolumeSpecName: "kube-api-access-zpprt") pod "0f1fb00c-903a-48c9-95e5-8ad34c731f41" (UID: "0f1fb00c-903a-48c9-95e5-8ad34c731f41"). InnerVolumeSpecName "kube-api-access-zpprt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.820619 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" (UID: "7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.832277 4881 scope.go:117] "RemoveContainer" containerID="5317a19a5c6fd411002c22415e0ba75ced188c533ac4cf93ad9bafb7600cfba0" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.859197 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f1fb00c-903a-48c9-95e5-8ad34c731f41-config-data" (OuterVolumeSpecName: "config-data") pod "0f1fb00c-903a-48c9-95e5-8ad34c731f41" (UID: "0f1fb00c-903a-48c9-95e5-8ad34c731f41"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.871134 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f1fb00c-903a-48c9-95e5-8ad34c731f41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f1fb00c-903a-48c9-95e5-8ad34c731f41" (UID: "0f1fb00c-903a-48c9-95e5-8ad34c731f41"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.875118 4881 scope.go:117] "RemoveContainer" containerID="77ab3e90f4bd352be1d58beb21ac3b7c5b6ccdc4776384b4fd7529acffc8aa21" Jan 21 11:24:01 crc kubenswrapper[4881]: E0121 11:24:01.875726 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77ab3e90f4bd352be1d58beb21ac3b7c5b6ccdc4776384b4fd7529acffc8aa21\": container with ID starting with 77ab3e90f4bd352be1d58beb21ac3b7c5b6ccdc4776384b4fd7529acffc8aa21 not found: ID does not exist" containerID="77ab3e90f4bd352be1d58beb21ac3b7c5b6ccdc4776384b4fd7529acffc8aa21" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.875768 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77ab3e90f4bd352be1d58beb21ac3b7c5b6ccdc4776384b4fd7529acffc8aa21"} err="failed to get container status \"77ab3e90f4bd352be1d58beb21ac3b7c5b6ccdc4776384b4fd7529acffc8aa21\": rpc error: code = NotFound desc = could not find container \"77ab3e90f4bd352be1d58beb21ac3b7c5b6ccdc4776384b4fd7529acffc8aa21\": container with ID starting with 77ab3e90f4bd352be1d58beb21ac3b7c5b6ccdc4776384b4fd7529acffc8aa21 not found: ID does not exist" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.875817 4881 scope.go:117] "RemoveContainer" containerID="5317a19a5c6fd411002c22415e0ba75ced188c533ac4cf93ad9bafb7600cfba0" Jan 21 11:24:01 crc kubenswrapper[4881]: E0121 11:24:01.879668 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5317a19a5c6fd411002c22415e0ba75ced188c533ac4cf93ad9bafb7600cfba0\": container with ID starting with 5317a19a5c6fd411002c22415e0ba75ced188c533ac4cf93ad9bafb7600cfba0 not found: ID does not exist" containerID="5317a19a5c6fd411002c22415e0ba75ced188c533ac4cf93ad9bafb7600cfba0" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.879925 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5317a19a5c6fd411002c22415e0ba75ced188c533ac4cf93ad9bafb7600cfba0"} err="failed to get container status \"5317a19a5c6fd411002c22415e0ba75ced188c533ac4cf93ad9bafb7600cfba0\": rpc error: code = NotFound desc = could not find container \"5317a19a5c6fd411002c22415e0ba75ced188c533ac4cf93ad9bafb7600cfba0\": container with ID starting with 5317a19a5c6fd411002c22415e0ba75ced188c533ac4cf93ad9bafb7600cfba0 not 
found: ID does not exist" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.915839 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpprt\" (UniqueName: \"kubernetes.io/projected/0f1fb00c-903a-48c9-95e5-8ad34c731f41-kube-api-access-zpprt\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.915886 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f1fb00c-903a-48c9-95e5-8ad34c731f41-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.915905 4881 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.915916 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f1fb00c-903a-48c9-95e5-8ad34c731f41-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.068374 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.082026 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.100077 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:24:02 crc kubenswrapper[4881]: E0121 11:24:02.139252 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52706c95-5c29-44cb-bc9d-2873d3a4d437" containerName="extract-content" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.139442 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="52706c95-5c29-44cb-bc9d-2873d3a4d437" containerName="extract-content" Jan 21 11:24:02 crc kubenswrapper[4881]: E0121 11:24:02.139548 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="859758f9-0dc2-4397-a75a-b098eaabe613" containerName="dnsmasq-dns" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.139612 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="859758f9-0dc2-4397-a75a-b098eaabe613" containerName="dnsmasq-dns" Jan 21 11:24:02 crc kubenswrapper[4881]: E0121 11:24:02.139723 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" containerName="nova-metadata-log" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.139815 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" containerName="nova-metadata-log" Jan 21 11:24:02 crc kubenswrapper[4881]: E0121 11:24:02.139908 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52706c95-5c29-44cb-bc9d-2873d3a4d437" containerName="registry-server" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.139984 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="52706c95-5c29-44cb-bc9d-2873d3a4d437" containerName="registry-server" Jan 21 11:24:02 crc kubenswrapper[4881]: E0121 11:24:02.140301 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d8ffc48-6b0f-48d1-b13d-8a766f5b604a" containerName="nova-manage" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.140391 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d8ffc48-6b0f-48d1-b13d-8a766f5b604a" containerName="nova-manage" Jan 
21 11:24:02 crc kubenswrapper[4881]: E0121 11:24:02.140471 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52706c95-5c29-44cb-bc9d-2873d3a4d437" containerName="extract-utilities" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.140557 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="52706c95-5c29-44cb-bc9d-2873d3a4d437" containerName="extract-utilities" Jan 21 11:24:02 crc kubenswrapper[4881]: E0121 11:24:02.140662 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f1fb00c-903a-48c9-95e5-8ad34c731f41" containerName="nova-scheduler-scheduler" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.140723 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f1fb00c-903a-48c9-95e5-8ad34c731f41" containerName="nova-scheduler-scheduler" Jan 21 11:24:02 crc kubenswrapper[4881]: E0121 11:24:02.140825 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="859758f9-0dc2-4397-a75a-b098eaabe613" containerName="init" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.145157 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="859758f9-0dc2-4397-a75a-b098eaabe613" containerName="init" Jan 21 11:24:02 crc kubenswrapper[4881]: E0121 11:24:02.145532 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" containerName="nova-metadata-metadata" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.145628 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" containerName="nova-metadata-metadata" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.148357 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="52706c95-5c29-44cb-bc9d-2873d3a4d437" containerName="registry-server" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.149073 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d8ffc48-6b0f-48d1-b13d-8a766f5b604a" containerName="nova-manage" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.149261 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f1fb00c-903a-48c9-95e5-8ad34c731f41" containerName="nova-scheduler-scheduler" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.149365 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="859758f9-0dc2-4397-a75a-b098eaabe613" containerName="dnsmasq-dns" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.149462 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" containerName="nova-metadata-metadata" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.149552 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" containerName="nova-metadata-log" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.152023 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.155640 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.166918 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.206403 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.263721 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.347859 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vthnm\" (UniqueName: \"kubernetes.io/projected/6f6e9d1b-902e-450b-8202-337c04c265ba-kube-api-access-vthnm\") pod \"nova-scheduler-0\" (UID: \"6f6e9d1b-902e-450b-8202-337c04c265ba\") " pod="openstack/nova-scheduler-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.347985 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6e9d1b-902e-450b-8202-337c04c265ba-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6f6e9d1b-902e-450b-8202-337c04c265ba\") " pod="openstack/nova-scheduler-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.348071 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f6e9d1b-902e-450b-8202-337c04c265ba-config-data\") pod \"nova-scheduler-0\" (UID: \"6f6e9d1b-902e-450b-8202-337c04c265ba\") " pod="openstack/nova-scheduler-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.636713 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vthnm\" (UniqueName: \"kubernetes.io/projected/6f6e9d1b-902e-450b-8202-337c04c265ba-kube-api-access-vthnm\") pod \"nova-scheduler-0\" (UID: \"6f6e9d1b-902e-450b-8202-337c04c265ba\") " pod="openstack/nova-scheduler-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.636855 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6e9d1b-902e-450b-8202-337c04c265ba-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6f6e9d1b-902e-450b-8202-337c04c265ba\") " pod="openstack/nova-scheduler-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.636932 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f6e9d1b-902e-450b-8202-337c04c265ba-config-data\") pod \"nova-scheduler-0\" (UID: \"6f6e9d1b-902e-450b-8202-337c04c265ba\") " pod="openstack/nova-scheduler-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.656067 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.657184 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f6e9d1b-902e-450b-8202-337c04c265ba-config-data\") pod \"nova-scheduler-0\" (UID: \"6f6e9d1b-902e-450b-8202-337c04c265ba\") " pod="openstack/nova-scheduler-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.668109 4881 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.670654 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6e9d1b-902e-450b-8202-337c04c265ba-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6f6e9d1b-902e-450b-8202-337c04c265ba\") " pod="openstack/nova-scheduler-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.675630 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.678602 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.679055 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.709272 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vthnm\" (UniqueName: \"kubernetes.io/projected/6f6e9d1b-902e-450b-8202-337c04c265ba-kube-api-access-vthnm\") pod \"nova-scheduler-0\" (UID: \"6f6e9d1b-902e-450b-8202-337c04c265ba\") " pod="openstack/nova-scheduler-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.742374 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba03e9fe-3ad6-4c52-bde7-bd41fca63834-logs\") pod \"nova-metadata-0\" (UID: \"ba03e9fe-3ad6-4c52-bde7-bd41fca63834\") " pod="openstack/nova-metadata-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.742616 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba03e9fe-3ad6-4c52-bde7-bd41fca63834-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ba03e9fe-3ad6-4c52-bde7-bd41fca63834\") " pod="openstack/nova-metadata-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.742693 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba03e9fe-3ad6-4c52-bde7-bd41fca63834-config-data\") pod \"nova-metadata-0\" (UID: \"ba03e9fe-3ad6-4c52-bde7-bd41fca63834\") " pod="openstack/nova-metadata-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.742814 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba03e9fe-3ad6-4c52-bde7-bd41fca63834-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ba03e9fe-3ad6-4c52-bde7-bd41fca63834\") " pod="openstack/nova-metadata-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.742939 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xjtc\" (UniqueName: \"kubernetes.io/projected/ba03e9fe-3ad6-4c52-bde7-bd41fca63834-kube-api-access-7xjtc\") pod \"nova-metadata-0\" (UID: \"ba03e9fe-3ad6-4c52-bde7-bd41fca63834\") " pod="openstack/nova-metadata-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.814966 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.849297 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xjtc\" (UniqueName: \"kubernetes.io/projected/ba03e9fe-3ad6-4c52-bde7-bd41fca63834-kube-api-access-7xjtc\") pod \"nova-metadata-0\" (UID: \"ba03e9fe-3ad6-4c52-bde7-bd41fca63834\") " pod="openstack/nova-metadata-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.849421 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba03e9fe-3ad6-4c52-bde7-bd41fca63834-logs\") pod \"nova-metadata-0\" (UID: \"ba03e9fe-3ad6-4c52-bde7-bd41fca63834\") " pod="openstack/nova-metadata-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.849516 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba03e9fe-3ad6-4c52-bde7-bd41fca63834-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ba03e9fe-3ad6-4c52-bde7-bd41fca63834\") " pod="openstack/nova-metadata-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.849555 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba03e9fe-3ad6-4c52-bde7-bd41fca63834-config-data\") pod \"nova-metadata-0\" (UID: \"ba03e9fe-3ad6-4c52-bde7-bd41fca63834\") " pod="openstack/nova-metadata-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.849604 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba03e9fe-3ad6-4c52-bde7-bd41fca63834-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ba03e9fe-3ad6-4c52-bde7-bd41fca63834\") " pod="openstack/nova-metadata-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.851034 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba03e9fe-3ad6-4c52-bde7-bd41fca63834-logs\") pod \"nova-metadata-0\" (UID: \"ba03e9fe-3ad6-4c52-bde7-bd41fca63834\") " pod="openstack/nova-metadata-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.860449 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba03e9fe-3ad6-4c52-bde7-bd41fca63834-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ba03e9fe-3ad6-4c52-bde7-bd41fca63834\") " pod="openstack/nova-metadata-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.863709 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba03e9fe-3ad6-4c52-bde7-bd41fca63834-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ba03e9fe-3ad6-4c52-bde7-bd41fca63834\") " pod="openstack/nova-metadata-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.885550 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba03e9fe-3ad6-4c52-bde7-bd41fca63834-config-data\") pod \"nova-metadata-0\" (UID: \"ba03e9fe-3ad6-4c52-bde7-bd41fca63834\") " pod="openstack/nova-metadata-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.898694 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xjtc\" (UniqueName: \"kubernetes.io/projected/ba03e9fe-3ad6-4c52-bde7-bd41fca63834-kube-api-access-7xjtc\") 
pod \"nova-metadata-0\" (UID: \"ba03e9fe-3ad6-4c52-bde7-bd41fca63834\") " pod="openstack/nova-metadata-0" Jan 21 11:24:03 crc kubenswrapper[4881]: I0121 11:24:03.103091 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 11:24:03 crc kubenswrapper[4881]: I0121 11:24:03.356569 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f1fb00c-903a-48c9-95e5-8ad34c731f41" path="/var/lib/kubelet/pods/0f1fb00c-903a-48c9-95e5-8ad34c731f41/volumes" Jan 21 11:24:03 crc kubenswrapper[4881]: I0121 11:24:03.357703 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" path="/var/lib/kubelet/pods/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52/volumes" Jan 21 11:24:03 crc kubenswrapper[4881]: I0121 11:24:03.572278 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:24:03 crc kubenswrapper[4881]: I0121 11:24:03.709166 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:24:03 crc kubenswrapper[4881]: I0121 11:24:03.797674 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ba03e9fe-3ad6-4c52-bde7-bd41fca63834","Type":"ContainerStarted","Data":"fe53cb2b73cf131ba87702f82293ec55e430d03c07c71539649567f45f53874f"} Jan 21 11:24:03 crc kubenswrapper[4881]: I0121 11:24:03.799339 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6f6e9d1b-902e-450b-8202-337c04c265ba","Type":"ContainerStarted","Data":"cd96095a65ce65b2d4398d0e24880f414fefff1c1599cbf11f9f33b12e6a1147"} Jan 21 11:24:05 crc kubenswrapper[4881]: I0121 11:24:05.140166 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ba03e9fe-3ad6-4c52-bde7-bd41fca63834","Type":"ContainerStarted","Data":"63d974af0b35962ab93c677bcb1af29aa9625d09e0c3792308c7143381283bc1"} Jan 21 11:24:05 crc kubenswrapper[4881]: I0121 11:24:05.143826 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6f6e9d1b-902e-450b-8202-337c04c265ba","Type":"ContainerStarted","Data":"6e4c353ef2b04f1523052293a5ef253ea031d72f0dc74ed199971d7c3de6e601"} Jan 21 11:24:05 crc kubenswrapper[4881]: I0121 11:24:05.176576 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.176539222 podStartE2EDuration="3.176539222s" podCreationTimestamp="2026-01-21 11:24:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:24:05.161932194 +0000 UTC m=+1632.421888673" watchObservedRunningTime="2026-01-21 11:24:05.176539222 +0000 UTC m=+1632.436495701" Jan 21 11:24:06 crc kubenswrapper[4881]: I0121 11:24:06.922354 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.080120 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da2439be-4ed2-43a2-adbe-dd4afaa012f3-logs\") pod \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.080218 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57mlh\" (UniqueName: \"kubernetes.io/projected/da2439be-4ed2-43a2-adbe-dd4afaa012f3-kube-api-access-57mlh\") pod \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.080336 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-public-tls-certs\") pod \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.080361 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-config-data\") pod \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.080419 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-combined-ca-bundle\") pod \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.080448 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-internal-tls-certs\") pod \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.081578 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da2439be-4ed2-43a2-adbe-dd4afaa012f3-logs" (OuterVolumeSpecName: "logs") pod "da2439be-4ed2-43a2-adbe-dd4afaa012f3" (UID: "da2439be-4ed2-43a2-adbe-dd4afaa012f3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.104232 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da2439be-4ed2-43a2-adbe-dd4afaa012f3-kube-api-access-57mlh" (OuterVolumeSpecName: "kube-api-access-57mlh") pod "da2439be-4ed2-43a2-adbe-dd4afaa012f3" (UID: "da2439be-4ed2-43a2-adbe-dd4afaa012f3"). InnerVolumeSpecName "kube-api-access-57mlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.178962 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da2439be-4ed2-43a2-adbe-dd4afaa012f3" (UID: "da2439be-4ed2-43a2-adbe-dd4afaa012f3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.186497 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.186537 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da2439be-4ed2-43a2-adbe-dd4afaa012f3-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.186547 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57mlh\" (UniqueName: \"kubernetes.io/projected/da2439be-4ed2-43a2-adbe-dd4afaa012f3-kube-api-access-57mlh\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.200096 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-config-data" (OuterVolumeSpecName: "config-data") pod "da2439be-4ed2-43a2-adbe-dd4afaa012f3" (UID: "da2439be-4ed2-43a2-adbe-dd4afaa012f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.200285 4881 generic.go:334] "Generic (PLEG): container finished" podID="da2439be-4ed2-43a2-adbe-dd4afaa012f3" containerID="5f6a607787d7e9e1eada9a9f91e574513eb5ba0e4548b904cb79b64f1f85f516" exitCode=0 Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.200421 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.200554 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"da2439be-4ed2-43a2-adbe-dd4afaa012f3","Type":"ContainerDied","Data":"5f6a607787d7e9e1eada9a9f91e574513eb5ba0e4548b904cb79b64f1f85f516"} Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.200613 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"da2439be-4ed2-43a2-adbe-dd4afaa012f3","Type":"ContainerDied","Data":"78fa7e5c3484fc7a90c022f360abd4837962f6679c1a08c1b9fdb22f193c9f13"} Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.200636 4881 scope.go:117] "RemoveContainer" containerID="5f6a607787d7e9e1eada9a9f91e574513eb5ba0e4548b904cb79b64f1f85f516" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.200611 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "da2439be-4ed2-43a2-adbe-dd4afaa012f3" (UID: "da2439be-4ed2-43a2-adbe-dd4afaa012f3"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.226932 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "da2439be-4ed2-43a2-adbe-dd4afaa012f3" (UID: "da2439be-4ed2-43a2-adbe-dd4afaa012f3"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.274428 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ba03e9fe-3ad6-4c52-bde7-bd41fca63834","Type":"ContainerStarted","Data":"331e9ee82d9defd168492b00a085b92acc44c562d368709fdd82fedce4f5fc8b"} Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.300632 4881 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.300675 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.300692 4881 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.794777 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=5.794751427 podStartE2EDuration="5.794751427s" podCreationTimestamp="2026-01-21 11:24:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:24:07.759363439 +0000 UTC m=+1635.019319918" watchObservedRunningTime="2026-01-21 11:24:07.794751427 +0000 UTC m=+1635.054707896" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.824457 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.824500 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.824515 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.867897 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 11:24:07 crc kubenswrapper[4881]: E0121 11:24:07.868394 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2439be-4ed2-43a2-adbe-dd4afaa012f3" containerName="nova-api-log" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.868417 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2439be-4ed2-43a2-adbe-dd4afaa012f3" containerName="nova-api-log" Jan 21 11:24:07 crc kubenswrapper[4881]: E0121 11:24:07.868444 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2439be-4ed2-43a2-adbe-dd4afaa012f3" containerName="nova-api-api" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.868450 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2439be-4ed2-43a2-adbe-dd4afaa012f3" containerName="nova-api-api" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.868654 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="da2439be-4ed2-43a2-adbe-dd4afaa012f3" containerName="nova-api-log" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.868678 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="da2439be-4ed2-43a2-adbe-dd4afaa012f3" containerName="nova-api-api" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 
11:24:07.869946 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.875221 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.875442 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.875596 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.887269 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.916885 4881 scope.go:117] "RemoveContainer" containerID="80e209a06fa6ebe24f14a7a5f19b6ec4b9439abda270d225d2c57b6f4688cd25" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.965811 4881 scope.go:117] "RemoveContainer" containerID="5f6a607787d7e9e1eada9a9f91e574513eb5ba0e4548b904cb79b64f1f85f516" Jan 21 11:24:07 crc kubenswrapper[4881]: E0121 11:24:07.966630 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f6a607787d7e9e1eada9a9f91e574513eb5ba0e4548b904cb79b64f1f85f516\": container with ID starting with 5f6a607787d7e9e1eada9a9f91e574513eb5ba0e4548b904cb79b64f1f85f516 not found: ID does not exist" containerID="5f6a607787d7e9e1eada9a9f91e574513eb5ba0e4548b904cb79b64f1f85f516" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.966671 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f6a607787d7e9e1eada9a9f91e574513eb5ba0e4548b904cb79b64f1f85f516"} err="failed to get container status \"5f6a607787d7e9e1eada9a9f91e574513eb5ba0e4548b904cb79b64f1f85f516\": rpc error: code = NotFound desc = could not find container \"5f6a607787d7e9e1eada9a9f91e574513eb5ba0e4548b904cb79b64f1f85f516\": container with ID starting with 5f6a607787d7e9e1eada9a9f91e574513eb5ba0e4548b904cb79b64f1f85f516 not found: ID does not exist" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.966815 4881 scope.go:117] "RemoveContainer" containerID="80e209a06fa6ebe24f14a7a5f19b6ec4b9439abda270d225d2c57b6f4688cd25" Jan 21 11:24:07 crc kubenswrapper[4881]: E0121 11:24:07.967056 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80e209a06fa6ebe24f14a7a5f19b6ec4b9439abda270d225d2c57b6f4688cd25\": container with ID starting with 80e209a06fa6ebe24f14a7a5f19b6ec4b9439abda270d225d2c57b6f4688cd25 not found: ID does not exist" containerID="80e209a06fa6ebe24f14a7a5f19b6ec4b9439abda270d225d2c57b6f4688cd25" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.967084 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80e209a06fa6ebe24f14a7a5f19b6ec4b9439abda270d225d2c57b6f4688cd25"} err="failed to get container status \"80e209a06fa6ebe24f14a7a5f19b6ec4b9439abda270d225d2c57b6f4688cd25\": rpc error: code = NotFound desc = could not find container \"80e209a06fa6ebe24f14a7a5f19b6ec4b9439abda270d225d2c57b6f4688cd25\": container with ID starting with 80e209a06fa6ebe24f14a7a5f19b6ec4b9439abda270d225d2c57b6f4688cd25 not found: ID does not exist" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.022131 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1188227a-462c-4c61-ae6e-96b55ffacd71-config-data\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.022460 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9g7s\" (UniqueName: \"kubernetes.io/projected/1188227a-462c-4c61-ae6e-96b55ffacd71-kube-api-access-q9g7s\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.022515 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1188227a-462c-4c61-ae6e-96b55ffacd71-public-tls-certs\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.022549 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1188227a-462c-4c61-ae6e-96b55ffacd71-logs\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.022605 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1188227a-462c-4c61-ae6e-96b55ffacd71-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.022681 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1188227a-462c-4c61-ae6e-96b55ffacd71-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.103944 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.104006 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.125206 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9g7s\" (UniqueName: \"kubernetes.io/projected/1188227a-462c-4c61-ae6e-96b55ffacd71-kube-api-access-q9g7s\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.125273 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1188227a-462c-4c61-ae6e-96b55ffacd71-public-tls-certs\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.125295 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1188227a-462c-4c61-ae6e-96b55ffacd71-logs\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc 
kubenswrapper[4881]: I0121 11:24:08.125326 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1188227a-462c-4c61-ae6e-96b55ffacd71-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.125363 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1188227a-462c-4c61-ae6e-96b55ffacd71-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.125394 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1188227a-462c-4c61-ae6e-96b55ffacd71-config-data\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.126531 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1188227a-462c-4c61-ae6e-96b55ffacd71-logs\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.130500 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1188227a-462c-4c61-ae6e-96b55ffacd71-config-data\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.138339 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1188227a-462c-4c61-ae6e-96b55ffacd71-public-tls-certs\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.138457 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1188227a-462c-4c61-ae6e-96b55ffacd71-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.139939 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1188227a-462c-4c61-ae6e-96b55ffacd71-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.145918 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9g7s\" (UniqueName: \"kubernetes.io/projected/1188227a-462c-4c61-ae6e-96b55ffacd71-kube-api-access-q9g7s\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.206201 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.713583 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:24:09 crc kubenswrapper[4881]: I0121 11:24:09.327601 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da2439be-4ed2-43a2-adbe-dd4afaa012f3" path="/var/lib/kubelet/pods/da2439be-4ed2-43a2-adbe-dd4afaa012f3/volumes" Jan 21 11:24:09 crc kubenswrapper[4881]: I0121 11:24:09.340570 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1188227a-462c-4c61-ae6e-96b55ffacd71","Type":"ContainerStarted","Data":"187947b5be610e4479183060e11dc95bdb009b7bf23c7effe8224cce0ad8dde2"} Jan 21 11:24:09 crc kubenswrapper[4881]: I0121 11:24:09.340666 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1188227a-462c-4c61-ae6e-96b55ffacd71","Type":"ContainerStarted","Data":"cdb49b5096b541660e1071519ec1a626dc191064b2d9b0bfbd67bf05ca6786b2"} Jan 21 11:24:09 crc kubenswrapper[4881]: I0121 11:24:09.340687 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1188227a-462c-4c61-ae6e-96b55ffacd71","Type":"ContainerStarted","Data":"43a8184c7c3fcc42843ef748dac7eaa2aeb72b11c53692db3d99c6d69892dd0a"} Jan 21 11:24:09 crc kubenswrapper[4881]: I0121 11:24:09.397900 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.397876045 podStartE2EDuration="2.397876045s" podCreationTimestamp="2026-01-21 11:24:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:24:09.367896461 +0000 UTC m=+1636.627852950" watchObservedRunningTime="2026-01-21 11:24:09.397876045 +0000 UTC m=+1636.657832514" Jan 21 11:24:12 crc kubenswrapper[4881]: I0121 11:24:12.816625 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 11:24:12 crc kubenswrapper[4881]: I0121 11:24:12.847512 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 11:24:13 crc kubenswrapper[4881]: I0121 11:24:13.104211 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 11:24:13 crc kubenswrapper[4881]: I0121 11:24:13.104289 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 11:24:13 crc kubenswrapper[4881]: I0121 11:24:13.581764 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 11:24:14 crc kubenswrapper[4881]: I0121 11:24:14.120020 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ba03e9fe-3ad6-4c52-bde7-bd41fca63834" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.228:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 11:24:14 crc kubenswrapper[4881]: I0121 11:24:14.120020 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ba03e9fe-3ad6-4c52-bde7-bd41fca63834" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.228:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 11:24:18 crc kubenswrapper[4881]: I0121 
11:24:18.206963 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 11:24:18 crc kubenswrapper[4881]: I0121 11:24:18.207488 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 11:24:18 crc kubenswrapper[4881]: I0121 11:24:18.902167 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 11:24:19 crc kubenswrapper[4881]: I0121 11:24:19.219081 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1188227a-462c-4c61-ae6e-96b55ffacd71" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.229:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 11:24:19 crc kubenswrapper[4881]: I0121 11:24:19.219108 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1188227a-462c-4c61-ae6e-96b55ffacd71" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.229:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 11:24:23 crc kubenswrapper[4881]: I0121 11:24:23.110518 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 11:24:23 crc kubenswrapper[4881]: I0121 11:24:23.111416 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 11:24:23 crc kubenswrapper[4881]: I0121 11:24:23.117077 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 11:24:23 crc kubenswrapper[4881]: I0121 11:24:23.663473 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 11:24:27 crc kubenswrapper[4881]: I0121 11:24:27.500264 4881 scope.go:117] "RemoveContainer" containerID="4b32abc6871e628e297cbe463288501e5adf49f03da08854de77bfb91714eedb" Jan 21 11:24:27 crc kubenswrapper[4881]: I0121 11:24:27.537850 4881 scope.go:117] "RemoveContainer" containerID="ce6a2cc0cc6379a9f8ed18cfa5d64954b4b7fdd11d37db77a73b2856418b87db" Jan 21 11:24:28 crc kubenswrapper[4881]: I0121 11:24:28.216495 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 11:24:28 crc kubenswrapper[4881]: I0121 11:24:28.217011 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 11:24:28 crc kubenswrapper[4881]: I0121 11:24:28.220464 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 11:24:28 crc kubenswrapper[4881]: I0121 11:24:28.227622 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 11:24:28 crc kubenswrapper[4881]: I0121 11:24:28.733520 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 11:24:28 crc kubenswrapper[4881]: I0121 11:24:28.743225 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 11:24:29 crc kubenswrapper[4881]: I0121 11:24:29.851267 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:24:29 
crc kubenswrapper[4881]: I0121 11:24:29.852096 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:24:29 crc kubenswrapper[4881]: I0121 11:24:29.852177 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 11:24:29 crc kubenswrapper[4881]: I0121 11:24:29.853485 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:24:29 crc kubenswrapper[4881]: I0121 11:24:29.853570 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" gracePeriod=600 Jan 21 11:24:29 crc kubenswrapper[4881]: E0121 11:24:29.985495 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:24:30 crc kubenswrapper[4881]: I0121 11:24:30.757577 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" exitCode=0 Jan 21 11:24:30 crc kubenswrapper[4881]: I0121 11:24:30.757665 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca"} Jan 21 11:24:30 crc kubenswrapper[4881]: I0121 11:24:30.757732 4881 scope.go:117] "RemoveContainer" containerID="7331cbf4e5c1ebad90ff508798581f83536e17ac3c1ee9a79afc3f65f6e8ad1a" Jan 21 11:24:30 crc kubenswrapper[4881]: I0121 11:24:30.758619 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:24:30 crc kubenswrapper[4881]: E0121 11:24:30.758874 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:24:38 crc kubenswrapper[4881]: I0121 11:24:38.325420 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 11:24:40 crc kubenswrapper[4881]: I0121 
11:24:40.222871 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 11:24:42 crc kubenswrapper[4881]: I0121 11:24:42.196370 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="f7e90972-9be1-4d3e-852e-e7f7df6e6623" containerName="rabbitmq" containerID="cri-o://8a0e4e5a99ef920688a0d7a6463ea9c0a7db6ff987fcbf667df0b4f98b3356bf" gracePeriod=604797 Jan 21 11:24:43 crc kubenswrapper[4881]: I0121 11:24:43.625139 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="078c2368-b247-49d4-8723-fd93918e99b1" containerName="rabbitmq" containerID="cri-o://023f57aba22657f38c9822a9fcfbabd9eb5513e10f1d131208e251a7df31b2a0" gracePeriod=604797 Jan 21 11:24:43 crc kubenswrapper[4881]: I0121 11:24:43.949806 4881 generic.go:334] "Generic (PLEG): container finished" podID="f7e90972-9be1-4d3e-852e-e7f7df6e6623" containerID="8a0e4e5a99ef920688a0d7a6463ea9c0a7db6ff987fcbf667df0b4f98b3356bf" exitCode=0 Jan 21 11:24:43 crc kubenswrapper[4881]: I0121 11:24:43.949888 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f7e90972-9be1-4d3e-852e-e7f7df6e6623","Type":"ContainerDied","Data":"8a0e4e5a99ef920688a0d7a6463ea9c0a7db6ff987fcbf667df0b4f98b3356bf"} Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.091148 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.399427 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-erlang-cookie\") pod \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.399588 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.399716 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7e90972-9be1-4d3e-852e-e7f7df6e6623-pod-info\") pod \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.400578 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f7e90972-9be1-4d3e-852e-e7f7df6e6623" (UID: "f7e90972-9be1-4d3e-852e-e7f7df6e6623"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.414876 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "f7e90972-9be1-4d3e-852e-e7f7df6e6623" (UID: "f7e90972-9be1-4d3e-852e-e7f7df6e6623"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.415199 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-server-conf\") pod \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.415260 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-plugins-conf\") pod \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.415359 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjgnd\" (UniqueName: \"kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-kube-api-access-tjgnd\") pod \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.415384 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-confd\") pod \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.415492 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-tls\") pod \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.415559 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-config-data\") pod \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.415595 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-plugins\") pod \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.415636 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7e90972-9be1-4d3e-852e-e7f7df6e6623-erlang-cookie-secret\") pod \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.416571 4881 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.416596 4881 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.419057 4881 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/downward-api/f7e90972-9be1-4d3e-852e-e7f7df6e6623-pod-info" (OuterVolumeSpecName: "pod-info") pod "f7e90972-9be1-4d3e-852e-e7f7df6e6623" (UID: "f7e90972-9be1-4d3e-852e-e7f7df6e6623"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.422615 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f7e90972-9be1-4d3e-852e-e7f7df6e6623" (UID: "f7e90972-9be1-4d3e-852e-e7f7df6e6623"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.423957 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f7e90972-9be1-4d3e-852e-e7f7df6e6623" (UID: "f7e90972-9be1-4d3e-852e-e7f7df6e6623"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.450287 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f7e90972-9be1-4d3e-852e-e7f7df6e6623" (UID: "f7e90972-9be1-4d3e-852e-e7f7df6e6623"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.452568 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7e90972-9be1-4d3e-852e-e7f7df6e6623-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f7e90972-9be1-4d3e-852e-e7f7df6e6623" (UID: "f7e90972-9be1-4d3e-852e-e7f7df6e6623"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.470614 4881 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.471697 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-kube-api-access-tjgnd" (OuterVolumeSpecName: "kube-api-access-tjgnd") pod "f7e90972-9be1-4d3e-852e-e7f7df6e6623" (UID: "f7e90972-9be1-4d3e-852e-e7f7df6e6623"). InnerVolumeSpecName "kube-api-access-tjgnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.507027 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-config-data" (OuterVolumeSpecName: "config-data") pod "f7e90972-9be1-4d3e-852e-e7f7df6e6623" (UID: "f7e90972-9be1-4d3e-852e-e7f7df6e6623"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.519846 4881 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.519881 4881 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7e90972-9be1-4d3e-852e-e7f7df6e6623-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.519895 4881 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.519908 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjgnd\" (UniqueName: \"kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-kube-api-access-tjgnd\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.519917 4881 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.519925 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.519934 4881 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.519942 4881 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7e90972-9be1-4d3e-852e-e7f7df6e6623-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.610892 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-server-conf" (OuterVolumeSpecName: "server-conf") pod "f7e90972-9be1-4d3e-852e-e7f7df6e6623" (UID: "f7e90972-9be1-4d3e-852e-e7f7df6e6623"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.622520 4881 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.630344 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f7e90972-9be1-4d3e-852e-e7f7df6e6623" (UID: "f7e90972-9be1-4d3e-852e-e7f7df6e6623"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.724159 4881 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.997611 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.998089 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f7e90972-9be1-4d3e-852e-e7f7df6e6623","Type":"ContainerDied","Data":"0407be0eb8897677e11cb341e14b52b133b745f624185504d845fdccc7ff50c4"} Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.998129 4881 scope.go:117] "RemoveContainer" containerID="8a0e4e5a99ef920688a0d7a6463ea9c0a7db6ff987fcbf667df0b4f98b3356bf" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.005651 4881 generic.go:334] "Generic (PLEG): container finished" podID="078c2368-b247-49d4-8723-fd93918e99b1" containerID="023f57aba22657f38c9822a9fcfbabd9eb5513e10f1d131208e251a7df31b2a0" exitCode=0 Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.005704 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"078c2368-b247-49d4-8723-fd93918e99b1","Type":"ContainerDied","Data":"023f57aba22657f38c9822a9fcfbabd9eb5513e10f1d131208e251a7df31b2a0"} Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.046521 4881 scope.go:117] "RemoveContainer" containerID="b30e547e2506fcebf2f8ac627808ad3f0382510a160b2079a570164ee838adfc" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.061965 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.089711 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.110419 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 11:24:45 crc kubenswrapper[4881]: E0121 11:24:45.111048 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7e90972-9be1-4d3e-852e-e7f7df6e6623" containerName="setup-container" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.111068 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7e90972-9be1-4d3e-852e-e7f7df6e6623" containerName="setup-container" Jan 21 11:24:45 crc kubenswrapper[4881]: E0121 11:24:45.111092 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7e90972-9be1-4d3e-852e-e7f7df6e6623" containerName="rabbitmq" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.111098 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7e90972-9be1-4d3e-852e-e7f7df6e6623" containerName="rabbitmq" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.111324 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7e90972-9be1-4d3e-852e-e7f7df6e6623" containerName="rabbitmq" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.112503 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.116095 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.116403 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.116449 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-x9qrf" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.116517 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.116449 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.116860 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.117018 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.123765 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.236243 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/35a19b99-eed0-4383-bea5-cf43d84a5a3e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.236298 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.236323 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/35a19b99-eed0-4383-bea5-cf43d84a5a3e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.236496 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6mrv\" (UniqueName: \"kubernetes.io/projected/35a19b99-eed0-4383-bea5-cf43d84a5a3e-kube-api-access-p6mrv\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.236808 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/35a19b99-eed0-4383-bea5-cf43d84a5a3e-config-data\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.236947 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/35a19b99-eed0-4383-bea5-cf43d84a5a3e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.237013 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/35a19b99-eed0-4383-bea5-cf43d84a5a3e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.237049 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/35a19b99-eed0-4383-bea5-cf43d84a5a3e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.237077 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/35a19b99-eed0-4383-bea5-cf43d84a5a3e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.237106 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/35a19b99-eed0-4383-bea5-cf43d84a5a3e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.237138 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/35a19b99-eed0-4383-bea5-cf43d84a5a3e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.315903 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.321988 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e90972-9be1-4d3e-852e-e7f7df6e6623" path="/var/lib/kubelet/pods/f7e90972-9be1-4d3e-852e-e7f7df6e6623/volumes" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.342126 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/35a19b99-eed0-4383-bea5-cf43d84a5a3e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.342226 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/35a19b99-eed0-4383-bea5-cf43d84a5a3e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.342270 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/35a19b99-eed0-4383-bea5-cf43d84a5a3e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.342298 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/35a19b99-eed0-4383-bea5-cf43d84a5a3e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.342331 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/35a19b99-eed0-4383-bea5-cf43d84a5a3e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.342369 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/35a19b99-eed0-4383-bea5-cf43d84a5a3e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.342436 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/35a19b99-eed0-4383-bea5-cf43d84a5a3e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.342474 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.342500 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/35a19b99-eed0-4383-bea5-cf43d84a5a3e-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.342537 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6mrv\" (UniqueName: \"kubernetes.io/projected/35a19b99-eed0-4383-bea5-cf43d84a5a3e-kube-api-access-p6mrv\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.342688 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/35a19b99-eed0-4383-bea5-cf43d84a5a3e-config-data\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.342798 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/35a19b99-eed0-4383-bea5-cf43d84a5a3e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.344116 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/35a19b99-eed0-4383-bea5-cf43d84a5a3e-config-data\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.344780 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/35a19b99-eed0-4383-bea5-cf43d84a5a3e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.345693 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/35a19b99-eed0-4383-bea5-cf43d84a5a3e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.348642 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/35a19b99-eed0-4383-bea5-cf43d84a5a3e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.348891 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.349460 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/35a19b99-eed0-4383-bea5-cf43d84a5a3e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.350515 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/35a19b99-eed0-4383-bea5-cf43d84a5a3e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.350934 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/35a19b99-eed0-4383-bea5-cf43d84a5a3e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.362094 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/35a19b99-eed0-4383-bea5-cf43d84a5a3e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.403104 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6mrv\" (UniqueName: \"kubernetes.io/projected/35a19b99-eed0-4383-bea5-cf43d84a5a3e-kube-api-access-p6mrv\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.442105 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.444930 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-erlang-cookie\") pod \"078c2368-b247-49d4-8723-fd93918e99b1\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.445077 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/078c2368-b247-49d4-8723-fd93918e99b1-pod-info\") pod \"078c2368-b247-49d4-8723-fd93918e99b1\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.445203 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "078c2368-b247-49d4-8723-fd93918e99b1" (UID: "078c2368-b247-49d4-8723-fd93918e99b1"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.445553 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"078c2368-b247-49d4-8723-fd93918e99b1\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.446486 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmd5s\" (UniqueName: \"kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-kube-api-access-bmd5s\") pod \"078c2368-b247-49d4-8723-fd93918e99b1\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.446563 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-server-conf\") pod \"078c2368-b247-49d4-8723-fd93918e99b1\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.447433 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-plugins-conf\") pod \"078c2368-b247-49d4-8723-fd93918e99b1\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.447551 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-config-data\") pod \"078c2368-b247-49d4-8723-fd93918e99b1\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.447583 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-plugins\") pod \"078c2368-b247-49d4-8723-fd93918e99b1\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.447626 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-confd\") pod \"078c2368-b247-49d4-8723-fd93918e99b1\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.447655 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-tls\") pod \"078c2368-b247-49d4-8723-fd93918e99b1\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.447682 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/078c2368-b247-49d4-8723-fd93918e99b1-erlang-cookie-secret\") pod \"078c2368-b247-49d4-8723-fd93918e99b1\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.448288 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "078c2368-b247-49d4-8723-fd93918e99b1" (UID: 
"078c2368-b247-49d4-8723-fd93918e99b1"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.448307 4881 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.449166 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/078c2368-b247-49d4-8723-fd93918e99b1-pod-info" (OuterVolumeSpecName: "pod-info") pod "078c2368-b247-49d4-8723-fd93918e99b1" (UID: "078c2368-b247-49d4-8723-fd93918e99b1"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.449688 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "078c2368-b247-49d4-8723-fd93918e99b1" (UID: "078c2368-b247-49d4-8723-fd93918e99b1"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.453844 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-kube-api-access-bmd5s" (OuterVolumeSpecName: "kube-api-access-bmd5s") pod "078c2368-b247-49d4-8723-fd93918e99b1" (UID: "078c2368-b247-49d4-8723-fd93918e99b1"). InnerVolumeSpecName "kube-api-access-bmd5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.456619 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "078c2368-b247-49d4-8723-fd93918e99b1" (UID: "078c2368-b247-49d4-8723-fd93918e99b1"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.459464 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "078c2368-b247-49d4-8723-fd93918e99b1" (UID: "078c2368-b247-49d4-8723-fd93918e99b1"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.461668 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/078c2368-b247-49d4-8723-fd93918e99b1-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "078c2368-b247-49d4-8723-fd93918e99b1" (UID: "078c2368-b247-49d4-8723-fd93918e99b1"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.461996 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.497819 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-config-data" (OuterVolumeSpecName: "config-data") pod "078c2368-b247-49d4-8723-fd93918e99b1" (UID: "078c2368-b247-49d4-8723-fd93918e99b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.517434 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-server-conf" (OuterVolumeSpecName: "server-conf") pod "078c2368-b247-49d4-8723-fd93918e99b1" (UID: "078c2368-b247-49d4-8723-fd93918e99b1"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.549543 4881 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.549572 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.549581 4881 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.549589 4881 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.549598 4881 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/078c2368-b247-49d4-8723-fd93918e99b1-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.549675 4881 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/078c2368-b247-49d4-8723-fd93918e99b1-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.549699 4881 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.549709 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmd5s\" (UniqueName: \"kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-kube-api-access-bmd5s\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.549718 4881 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.581493 4881 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 
21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.605072 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "078c2368-b247-49d4-8723-fd93918e99b1" (UID: "078c2368-b247-49d4-8723-fd93918e99b1"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.653324 4881 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.653375 4881 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.938548 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.018861 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"35a19b99-eed0-4383-bea5-cf43d84a5a3e","Type":"ContainerStarted","Data":"b61ec4ecd31391566c8185e90cc9bde05f33548160425c605a2a9789abeeafd4"} Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.021778 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"078c2368-b247-49d4-8723-fd93918e99b1","Type":"ContainerDied","Data":"cb426b0ea6a917959cdcac6b6915e9a598cb2f51672af4e37994bc672acc84c9"} Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.021855 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.022104 4881 scope.go:117] "RemoveContainer" containerID="023f57aba22657f38c9822a9fcfbabd9eb5513e10f1d131208e251a7df31b2a0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.111051 4881 scope.go:117] "RemoveContainer" containerID="26f697deade0e9783aed3c09129f2f0589fbb10b53e3501c212b7fcc5f5b5d86" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.143236 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.157714 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.175967 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 11:24:46 crc kubenswrapper[4881]: E0121 11:24:46.177045 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="078c2368-b247-49d4-8723-fd93918e99b1" containerName="rabbitmq" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.177180 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="078c2368-b247-49d4-8723-fd93918e99b1" containerName="rabbitmq" Jan 21 11:24:46 crc kubenswrapper[4881]: E0121 11:24:46.177314 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="078c2368-b247-49d4-8723-fd93918e99b1" containerName="setup-container" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.177411 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="078c2368-b247-49d4-8723-fd93918e99b1" containerName="setup-container" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.177811 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="078c2368-b247-49d4-8723-fd93918e99b1" containerName="rabbitmq" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.183004 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.189228 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.189307 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.189522 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.189635 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.189749 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.189893 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.190489 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-tt7xn" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.201159 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.311807 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:24:46 crc kubenswrapper[4881]: E0121 11:24:46.312183 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.674204 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/de7ea801-d184-48cf-a602-c82ff20892ff-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.674307 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/de7ea801-d184-48cf-a602-c82ff20892ff-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.674353 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de7ea801-d184-48cf-a602-c82ff20892ff-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.674376 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/de7ea801-d184-48cf-a602-c82ff20892ff-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.674447 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/de7ea801-d184-48cf-a602-c82ff20892ff-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.674464 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/de7ea801-d184-48cf-a602-c82ff20892ff-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.674513 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.674535 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cftw\" (UniqueName: \"kubernetes.io/projected/de7ea801-d184-48cf-a602-c82ff20892ff-kube-api-access-6cftw\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.674602 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/de7ea801-d184-48cf-a602-c82ff20892ff-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.674677 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/de7ea801-d184-48cf-a602-c82ff20892ff-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.674708 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/de7ea801-d184-48cf-a602-c82ff20892ff-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.776566 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/de7ea801-d184-48cf-a602-c82ff20892ff-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.776673 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/de7ea801-d184-48cf-a602-c82ff20892ff-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.777722 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de7ea801-d184-48cf-a602-c82ff20892ff-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.777754 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/de7ea801-d184-48cf-a602-c82ff20892ff-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.777922 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/de7ea801-d184-48cf-a602-c82ff20892ff-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.778769 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/de7ea801-d184-48cf-a602-c82ff20892ff-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.778936 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/de7ea801-d184-48cf-a602-c82ff20892ff-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.778987 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.779022 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cftw\" (UniqueName: \"kubernetes.io/projected/de7ea801-d184-48cf-a602-c82ff20892ff-kube-api-access-6cftw\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.779192 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/de7ea801-d184-48cf-a602-c82ff20892ff-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.779383 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") device mount path \"/mnt/openstack/pv04\"" 
pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.780024 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/de7ea801-d184-48cf-a602-c82ff20892ff-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.780149 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/de7ea801-d184-48cf-a602-c82ff20892ff-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.780210 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/de7ea801-d184-48cf-a602-c82ff20892ff-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.780574 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/de7ea801-d184-48cf-a602-c82ff20892ff-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.780600 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/de7ea801-d184-48cf-a602-c82ff20892ff-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.780923 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/de7ea801-d184-48cf-a602-c82ff20892ff-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.784389 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/de7ea801-d184-48cf-a602-c82ff20892ff-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.784774 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/de7ea801-d184-48cf-a602-c82ff20892ff-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.785293 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/de7ea801-d184-48cf-a602-c82ff20892ff-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.793158 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/de7ea801-d184-48cf-a602-c82ff20892ff-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.798454 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cftw\" (UniqueName: \"kubernetes.io/projected/de7ea801-d184-48cf-a602-c82ff20892ff-kube-api-access-6cftw\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.824717 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:47 crc kubenswrapper[4881]: I0121 11:24:47.120564 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:47 crc kubenswrapper[4881]: I0121 11:24:47.336079 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="078c2368-b247-49d4-8723-fd93918e99b1" path="/var/lib/kubelet/pods/078c2368-b247-49d4-8723-fd93918e99b1/volumes" Jan 21 11:24:47 crc kubenswrapper[4881]: I0121 11:24:47.614161 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 11:24:48 crc kubenswrapper[4881]: I0121 11:24:48.055247 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"de7ea801-d184-48cf-a602-c82ff20892ff","Type":"ContainerStarted","Data":"dc4b5e0e4224dd4ec733e65a2e91278b819f3625a2a848cb9582dcac2e68f27e"} Jan 21 11:24:49 crc kubenswrapper[4881]: I0121 11:24:49.068063 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"35a19b99-eed0-4383-bea5-cf43d84a5a3e","Type":"ContainerStarted","Data":"634428d31431025fdccf3934e18d58dc33fc9e53d8e3c10e3fc62735d4af9040"} Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.135930 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8676bcc57f-wp596"] Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.139099 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.141321 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.156947 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8676bcc57f-wp596"] Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.273381 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-openstack-edpm-ipam\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.273775 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-ovsdbserver-nb\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.273866 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-ovsdbserver-sb\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.273940 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-dns-swift-storage-0\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.273979 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-dns-svc\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.274057 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-config\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.274112 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvnw9\" (UniqueName: \"kubernetes.io/projected/ec2fab32-4eac-4a26-9ddb-40132e94976f-kube-api-access-bvnw9\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.376216 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-ovsdbserver-nb\") pod \"dnsmasq-dns-8676bcc57f-wp596\" 
(UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.376372 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-ovsdbserver-sb\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.376465 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-dns-swift-storage-0\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.376510 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-dns-svc\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.376596 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-config\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.376652 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvnw9\" (UniqueName: \"kubernetes.io/projected/ec2fab32-4eac-4a26-9ddb-40132e94976f-kube-api-access-bvnw9\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.377925 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-dns-svc\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.378008 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-dns-swift-storage-0\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.378513 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-config\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.378851 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-ovsdbserver-sb\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 
crc kubenswrapper[4881]: I0121 11:24:56.379201 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-ovsdbserver-nb\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.379250 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-openstack-edpm-ipam\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.380123 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-openstack-edpm-ipam\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.403358 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvnw9\" (UniqueName: \"kubernetes.io/projected/ec2fab32-4eac-4a26-9ddb-40132e94976f-kube-api-access-bvnw9\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.461625 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:57 crc kubenswrapper[4881]: I0121 11:24:57.186251 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"de7ea801-d184-48cf-a602-c82ff20892ff","Type":"ContainerStarted","Data":"8e68b25c764b9e0b867a6f82b7e2e448c02c2d37267bc95d906ed96df4996747"} Jan 21 11:24:57 crc kubenswrapper[4881]: I0121 11:24:57.228162 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8676bcc57f-wp596"] Jan 21 11:24:58 crc kubenswrapper[4881]: I0121 11:24:58.195653 4881 generic.go:334] "Generic (PLEG): container finished" podID="ec2fab32-4eac-4a26-9ddb-40132e94976f" containerID="65cfd2dd1128a88bc70d491463496b79cfb2dcc5abc049d917dd83ad5f45761a" exitCode=0 Jan 21 11:24:58 crc kubenswrapper[4881]: I0121 11:24:58.195703 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8676bcc57f-wp596" event={"ID":"ec2fab32-4eac-4a26-9ddb-40132e94976f","Type":"ContainerDied","Data":"65cfd2dd1128a88bc70d491463496b79cfb2dcc5abc049d917dd83ad5f45761a"} Jan 21 11:24:58 crc kubenswrapper[4881]: I0121 11:24:58.196216 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8676bcc57f-wp596" event={"ID":"ec2fab32-4eac-4a26-9ddb-40132e94976f","Type":"ContainerStarted","Data":"81049c0c5e8e6d15434e36288df117ccffe86a12005f731fb0b39ecb31197cdc"} Jan 21 11:24:59 crc kubenswrapper[4881]: I0121 11:24:59.207846 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8676bcc57f-wp596" event={"ID":"ec2fab32-4eac-4a26-9ddb-40132e94976f","Type":"ContainerStarted","Data":"3ebc49c540ff95bec9f3779f43c3effaa601aed7e73346317b526874af0e6390"} Jan 21 11:24:59 crc kubenswrapper[4881]: I0121 11:24:59.208153 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:59 crc kubenswrapper[4881]: I0121 11:24:59.227705 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8676bcc57f-wp596" podStartSLOduration=3.227685072 podStartE2EDuration="3.227685072s" podCreationTimestamp="2026-01-21 11:24:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:24:59.227081776 +0000 UTC m=+1686.487038265" watchObservedRunningTime="2026-01-21 11:24:59.227685072 +0000 UTC m=+1686.487641541" Jan 21 11:25:01 crc kubenswrapper[4881]: I0121 11:25:01.311440 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:25:01 crc kubenswrapper[4881]: E0121 11:25:01.313090 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.463995 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.531984 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d4b6b54d9-5jzpq"] Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.532289 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" podUID="81dbec06-59d7-4c42-a558-910811fb3811" containerName="dnsmasq-dns" containerID="cri-o://a807273d95c9864f3ecabade018dc0a91eb28a83bcfcbef9786d9473502a12a5" gracePeriod=10 Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.698092 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59596cff49-cpxcq"] Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.706068 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.729351 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59596cff49-cpxcq"] Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.797961 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-openstack-edpm-ipam\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.798020 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-dns-svc\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.798089 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-config\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.798133 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-ovsdbserver-nb\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.798168 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-ovsdbserver-sb\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.798239 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4x2d\" (UniqueName: \"kubernetes.io/projected/a08dbd57-125f-4ca2-b166-434068ee9432-kube-api-access-g4x2d\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.798281 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-dns-swift-storage-0\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.900860 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-ovsdbserver-sb\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.900972 4881 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4x2d\" (UniqueName: \"kubernetes.io/projected/a08dbd57-125f-4ca2-b166-434068ee9432-kube-api-access-g4x2d\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.900993 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-dns-swift-storage-0\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.901034 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-openstack-edpm-ipam\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.901058 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-dns-svc\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.901116 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-config\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.901158 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-ovsdbserver-nb\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.902285 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-ovsdbserver-sb\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.902545 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-ovsdbserver-nb\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.902607 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-openstack-edpm-ipam\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.902684 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-dns-svc\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.902770 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-config\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.902774 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-dns-swift-storage-0\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.926499 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4x2d\" (UniqueName: \"kubernetes.io/projected/a08dbd57-125f-4ca2-b166-434068ee9432-kube-api-access-g4x2d\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.046633 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.372360 4881 generic.go:334] "Generic (PLEG): container finished" podID="81dbec06-59d7-4c42-a558-910811fb3811" containerID="a807273d95c9864f3ecabade018dc0a91eb28a83bcfcbef9786d9473502a12a5" exitCode=0 Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.395109 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" event={"ID":"81dbec06-59d7-4c42-a558-910811fb3811","Type":"ContainerDied","Data":"a807273d95c9864f3ecabade018dc0a91eb28a83bcfcbef9786d9473502a12a5"} Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.395159 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" event={"ID":"81dbec06-59d7-4c42-a558-910811fb3811","Type":"ContainerDied","Data":"14e34995d6813b59d5fbddbd68a531e00edeb5c9ae370d72d56de9da156f7345"} Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.395172 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14e34995d6813b59d5fbddbd68a531e00edeb5c9ae370d72d56de9da156f7345" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.396633 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.546602 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-ovsdbserver-nb\") pod \"81dbec06-59d7-4c42-a558-910811fb3811\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.547493 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwg4c\" (UniqueName: \"kubernetes.io/projected/81dbec06-59d7-4c42-a558-910811fb3811-kube-api-access-lwg4c\") pod \"81dbec06-59d7-4c42-a558-910811fb3811\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.547714 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-dns-swift-storage-0\") pod \"81dbec06-59d7-4c42-a558-910811fb3811\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.547821 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-config\") pod \"81dbec06-59d7-4c42-a558-910811fb3811\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.547875 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-dns-svc\") pod \"81dbec06-59d7-4c42-a558-910811fb3811\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.547927 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-ovsdbserver-sb\") pod \"81dbec06-59d7-4c42-a558-910811fb3811\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.574682 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81dbec06-59d7-4c42-a558-910811fb3811-kube-api-access-lwg4c" (OuterVolumeSpecName: "kube-api-access-lwg4c") pod "81dbec06-59d7-4c42-a558-910811fb3811" (UID: "81dbec06-59d7-4c42-a558-910811fb3811"). InnerVolumeSpecName "kube-api-access-lwg4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.617631 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-config" (OuterVolumeSpecName: "config") pod "81dbec06-59d7-4c42-a558-910811fb3811" (UID: "81dbec06-59d7-4c42-a558-910811fb3811"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.623136 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "81dbec06-59d7-4c42-a558-910811fb3811" (UID: "81dbec06-59d7-4c42-a558-910811fb3811"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.637679 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "81dbec06-59d7-4c42-a558-910811fb3811" (UID: "81dbec06-59d7-4c42-a558-910811fb3811"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.638904 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "81dbec06-59d7-4c42-a558-910811fb3811" (UID: "81dbec06-59d7-4c42-a558-910811fb3811"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.651247 4881 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.651295 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.651312 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.651324 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.651336 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwg4c\" (UniqueName: \"kubernetes.io/projected/81dbec06-59d7-4c42-a558-910811fb3811-kube-api-access-lwg4c\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.683474 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "81dbec06-59d7-4c42-a558-910811fb3811" (UID: "81dbec06-59d7-4c42-a558-910811fb3811"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.753458 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.798821 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59596cff49-cpxcq"] Jan 21 11:25:07 crc kubenswrapper[4881]: W0121 11:25:07.803617 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda08dbd57_125f_4ca2_b166_434068ee9432.slice/crio-963f766e0019513a258fc50bf0d251df7fbc1e6635d9d8cab51e022c49eee27b WatchSource:0}: Error finding container 963f766e0019513a258fc50bf0d251df7fbc1e6635d9d8cab51e022c49eee27b: Status 404 returned error can't find the container with id 963f766e0019513a258fc50bf0d251df7fbc1e6635d9d8cab51e022c49eee27b Jan 21 11:25:08 crc kubenswrapper[4881]: I0121 11:25:08.406987 4881 generic.go:334] "Generic (PLEG): container finished" podID="a08dbd57-125f-4ca2-b166-434068ee9432" containerID="bba85260be07f097ed4444f9ead41161f18f05f9b642a209ac057f05e683cd36" exitCode=0 Jan 21 11:25:08 crc kubenswrapper[4881]: I0121 11:25:08.407529 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:25:08 crc kubenswrapper[4881]: I0121 11:25:08.407239 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59596cff49-cpxcq" event={"ID":"a08dbd57-125f-4ca2-b166-434068ee9432","Type":"ContainerDied","Data":"bba85260be07f097ed4444f9ead41161f18f05f9b642a209ac057f05e683cd36"} Jan 21 11:25:08 crc kubenswrapper[4881]: I0121 11:25:08.409412 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59596cff49-cpxcq" event={"ID":"a08dbd57-125f-4ca2-b166-434068ee9432","Type":"ContainerStarted","Data":"963f766e0019513a258fc50bf0d251df7fbc1e6635d9d8cab51e022c49eee27b"} Jan 21 11:25:08 crc kubenswrapper[4881]: I0121 11:25:08.505035 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d4b6b54d9-5jzpq"] Jan 21 11:25:08 crc kubenswrapper[4881]: I0121 11:25:08.515911 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d4b6b54d9-5jzpq"] Jan 21 11:25:09 crc kubenswrapper[4881]: I0121 11:25:09.325003 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81dbec06-59d7-4c42-a558-910811fb3811" path="/var/lib/kubelet/pods/81dbec06-59d7-4c42-a558-910811fb3811/volumes" Jan 21 11:25:09 crc kubenswrapper[4881]: I0121 11:25:09.423200 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59596cff49-cpxcq" event={"ID":"a08dbd57-125f-4ca2-b166-434068ee9432","Type":"ContainerStarted","Data":"818f72e3c5f9d0f5c6e8c41d19fec30d6ec474a92c13b5e8032090ea9a66c126"} Jan 21 11:25:09 crc kubenswrapper[4881]: I0121 11:25:09.423620 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:09 crc kubenswrapper[4881]: I0121 11:25:09.467747 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59596cff49-cpxcq" podStartSLOduration=3.467713663 podStartE2EDuration="3.467713663s" podCreationTimestamp="2026-01-21 11:25:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:25:09.456726164 +0000 UTC m=+1696.716682633" watchObservedRunningTime="2026-01-21 11:25:09.467713663 +0000 UTC m=+1696.727670152" Jan 21 11:25:13 crc kubenswrapper[4881]: I0121 11:25:13.317993 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:25:13 crc kubenswrapper[4881]: E0121 11:25:13.318728 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.048017 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.134906 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8676bcc57f-wp596"] Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.135191 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8676bcc57f-wp596" podUID="ec2fab32-4eac-4a26-9ddb-40132e94976f" containerName="dnsmasq-dns" containerID="cri-o://3ebc49c540ff95bec9f3779f43c3effaa601aed7e73346317b526874af0e6390" gracePeriod=10 Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.558900 4881 generic.go:334] "Generic (PLEG): container finished" podID="ec2fab32-4eac-4a26-9ddb-40132e94976f" containerID="3ebc49c540ff95bec9f3779f43c3effaa601aed7e73346317b526874af0e6390" exitCode=0 Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.558981 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8676bcc57f-wp596" event={"ID":"ec2fab32-4eac-4a26-9ddb-40132e94976f","Type":"ContainerDied","Data":"3ebc49c540ff95bec9f3779f43c3effaa601aed7e73346317b526874af0e6390"} Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.644395 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.707665 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-dns-swift-storage-0\") pod \"ec2fab32-4eac-4a26-9ddb-40132e94976f\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.707981 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-ovsdbserver-nb\") pod \"ec2fab32-4eac-4a26-9ddb-40132e94976f\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.708028 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-dns-svc\") pod \"ec2fab32-4eac-4a26-9ddb-40132e94976f\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.708109 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-openstack-edpm-ipam\") pod \"ec2fab32-4eac-4a26-9ddb-40132e94976f\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.708206 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvnw9\" (UniqueName: \"kubernetes.io/projected/ec2fab32-4eac-4a26-9ddb-40132e94976f-kube-api-access-bvnw9\") pod \"ec2fab32-4eac-4a26-9ddb-40132e94976f\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.708306 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-ovsdbserver-sb\") pod \"ec2fab32-4eac-4a26-9ddb-40132e94976f\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.708338 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-config\") pod \"ec2fab32-4eac-4a26-9ddb-40132e94976f\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.715184 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec2fab32-4eac-4a26-9ddb-40132e94976f-kube-api-access-bvnw9" (OuterVolumeSpecName: "kube-api-access-bvnw9") pod "ec2fab32-4eac-4a26-9ddb-40132e94976f" (UID: "ec2fab32-4eac-4a26-9ddb-40132e94976f"). InnerVolumeSpecName "kube-api-access-bvnw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.779227 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-config" (OuterVolumeSpecName: "config") pod "ec2fab32-4eac-4a26-9ddb-40132e94976f" (UID: "ec2fab32-4eac-4a26-9ddb-40132e94976f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.781161 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ec2fab32-4eac-4a26-9ddb-40132e94976f" (UID: "ec2fab32-4eac-4a26-9ddb-40132e94976f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.802041 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ec2fab32-4eac-4a26-9ddb-40132e94976f" (UID: "ec2fab32-4eac-4a26-9ddb-40132e94976f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.802081 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ec2fab32-4eac-4a26-9ddb-40132e94976f" (UID: "ec2fab32-4eac-4a26-9ddb-40132e94976f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.806806 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ec2fab32-4eac-4a26-9ddb-40132e94976f" (UID: "ec2fab32-4eac-4a26-9ddb-40132e94976f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.811360 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvnw9\" (UniqueName: \"kubernetes.io/projected/ec2fab32-4eac-4a26-9ddb-40132e94976f-kube-api-access-bvnw9\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.811408 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.811421 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.811433 4881 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.811444 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.811455 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.829480 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "ec2fab32-4eac-4a26-9ddb-40132e94976f" (UID: "ec2fab32-4eac-4a26-9ddb-40132e94976f"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.914133 4881 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:18 crc kubenswrapper[4881]: I0121 11:25:18.574381 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8676bcc57f-wp596" event={"ID":"ec2fab32-4eac-4a26-9ddb-40132e94976f","Type":"ContainerDied","Data":"81049c0c5e8e6d15434e36288df117ccffe86a12005f731fb0b39ecb31197cdc"} Jan 21 11:25:18 crc kubenswrapper[4881]: I0121 11:25:18.574713 4881 scope.go:117] "RemoveContainer" containerID="3ebc49c540ff95bec9f3779f43c3effaa601aed7e73346317b526874af0e6390" Jan 21 11:25:18 crc kubenswrapper[4881]: I0121 11:25:18.574440 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:25:18 crc kubenswrapper[4881]: I0121 11:25:18.607309 4881 scope.go:117] "RemoveContainer" containerID="65cfd2dd1128a88bc70d491463496b79cfb2dcc5abc049d917dd83ad5f45761a" Jan 21 11:25:18 crc kubenswrapper[4881]: I0121 11:25:18.613427 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8676bcc57f-wp596"] Jan 21 11:25:18 crc kubenswrapper[4881]: I0121 11:25:18.624007 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8676bcc57f-wp596"] Jan 21 11:25:19 crc kubenswrapper[4881]: I0121 11:25:19.328601 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec2fab32-4eac-4a26-9ddb-40132e94976f" path="/var/lib/kubelet/pods/ec2fab32-4eac-4a26-9ddb-40132e94976f/volumes" Jan 21 11:25:20 crc kubenswrapper[4881]: I0121 11:25:20.599347 4881 generic.go:334] "Generic (PLEG): container finished" podID="35a19b99-eed0-4383-bea5-cf43d84a5a3e" containerID="634428d31431025fdccf3934e18d58dc33fc9e53d8e3c10e3fc62735d4af9040" exitCode=0 Jan 21 11:25:20 crc kubenswrapper[4881]: I0121 11:25:20.599490 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"35a19b99-eed0-4383-bea5-cf43d84a5a3e","Type":"ContainerDied","Data":"634428d31431025fdccf3934e18d58dc33fc9e53d8e3c10e3fc62735d4af9040"} Jan 21 11:25:21 crc kubenswrapper[4881]: I0121 11:25:21.614816 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"35a19b99-eed0-4383-bea5-cf43d84a5a3e","Type":"ContainerStarted","Data":"fe68fbf9120089c1e7cd6dc6a3d745261c371e91187628d27a7621185c38f5cd"} Jan 21 11:25:21 crc kubenswrapper[4881]: I0121 11:25:21.615309 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 21 11:25:21 crc kubenswrapper[4881]: I0121 11:25:21.647750 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.647723535 podStartE2EDuration="36.647723535s" podCreationTimestamp="2026-01-21 11:24:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:25:21.637674779 +0000 UTC m=+1708.897631248" 
watchObservedRunningTime="2026-01-21 11:25:21.647723535 +0000 UTC m=+1708.907680004" Jan 21 11:25:26 crc kubenswrapper[4881]: I0121 11:25:26.311427 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:25:26 crc kubenswrapper[4881]: E0121 11:25:26.312381 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:25:29 crc kubenswrapper[4881]: I0121 11:25:29.690981 4881 generic.go:334] "Generic (PLEG): container finished" podID="de7ea801-d184-48cf-a602-c82ff20892ff" containerID="8e68b25c764b9e0b867a6f82b7e2e448c02c2d37267bc95d906ed96df4996747" exitCode=0 Jan 21 11:25:29 crc kubenswrapper[4881]: I0121 11:25:29.691272 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"de7ea801-d184-48cf-a602-c82ff20892ff","Type":"ContainerDied","Data":"8e68b25c764b9e0b867a6f82b7e2e448c02c2d37267bc95d906ed96df4996747"} Jan 21 11:25:30 crc kubenswrapper[4881]: I0121 11:25:30.704993 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"de7ea801-d184-48cf-a602-c82ff20892ff","Type":"ContainerStarted","Data":"ec707c548b6f8c2a6983970dd435a8fafbf0658a06bfa5f5b4657e3f98f9908d"} Jan 21 11:25:30 crc kubenswrapper[4881]: I0121 11:25:30.705512 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:25:30 crc kubenswrapper[4881]: I0121 11:25:30.739863 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=44.73984155 podStartE2EDuration="44.73984155s" podCreationTimestamp="2026-01-21 11:24:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:25:30.732592923 +0000 UTC m=+1717.992549452" watchObservedRunningTime="2026-01-21 11:25:30.73984155 +0000 UTC m=+1717.999798029" Jan 21 11:25:35 crc kubenswrapper[4881]: I0121 11:25:35.467139 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 21 11:25:35 crc kubenswrapper[4881]: I0121 11:25:35.962663 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c"] Jan 21 11:25:35 crc kubenswrapper[4881]: E0121 11:25:35.963166 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec2fab32-4eac-4a26-9ddb-40132e94976f" containerName="init" Jan 21 11:25:35 crc kubenswrapper[4881]: I0121 11:25:35.963182 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec2fab32-4eac-4a26-9ddb-40132e94976f" containerName="init" Jan 21 11:25:35 crc kubenswrapper[4881]: E0121 11:25:35.963194 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81dbec06-59d7-4c42-a558-910811fb3811" containerName="init" Jan 21 11:25:35 crc kubenswrapper[4881]: I0121 11:25:35.963201 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="81dbec06-59d7-4c42-a558-910811fb3811" containerName="init" Jan 21 11:25:35 crc kubenswrapper[4881]: E0121 11:25:35.963223 4881 
Jan 21 11:25:35 crc kubenswrapper[4881]: I0121 11:25:35.963229 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec2fab32-4eac-4a26-9ddb-40132e94976f" containerName="dnsmasq-dns"
Jan 21 11:25:35 crc kubenswrapper[4881]: E0121 11:25:35.963244 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81dbec06-59d7-4c42-a558-910811fb3811" containerName="dnsmasq-dns"
Jan 21 11:25:35 crc kubenswrapper[4881]: I0121 11:25:35.963249 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="81dbec06-59d7-4c42-a558-910811fb3811" containerName="dnsmasq-dns"
Jan 21 11:25:35 crc kubenswrapper[4881]: I0121 11:25:35.963425 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec2fab32-4eac-4a26-9ddb-40132e94976f" containerName="dnsmasq-dns"
Jan 21 11:25:35 crc kubenswrapper[4881]: I0121 11:25:35.963445 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="81dbec06-59d7-4c42-a558-910811fb3811" containerName="dnsmasq-dns"
Jan 21 11:25:35 crc kubenswrapper[4881]: I0121 11:25:35.964186 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c"
Jan 21 11:25:35 crc kubenswrapper[4881]: I0121 11:25:35.973651 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 11:25:35 crc kubenswrapper[4881]: I0121 11:25:35.973848 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg"
Jan 21 11:25:35 crc kubenswrapper[4881]: I0121 11:25:35.974182 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 21 11:25:35 crc kubenswrapper[4881]: I0121 11:25:35.974315 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 21 11:25:35 crc kubenswrapper[4881]: I0121 11:25:35.988463 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c"]
Jan 21 11:25:36 crc kubenswrapper[4881]: I0121 11:25:36.073296 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c\" (UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c"
Jan 21 11:25:36 crc kubenswrapper[4881]: I0121 11:25:36.075175 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c\" (UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c"
Jan 21 11:25:36 crc kubenswrapper[4881]: I0121 11:25:36.075252 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l5pl\" (UniqueName: \"kubernetes.io/projected/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-kube-api-access-9l5pl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c\" (UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c"
Jan 21 11:25:36 crc kubenswrapper[4881]: I0121 11:25:36.075330 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c\" (UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c"
Jan 21 11:25:36 crc kubenswrapper[4881]: I0121 11:25:36.177785 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c\" (UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c"
Jan 21 11:25:36 crc kubenswrapper[4881]: I0121 11:25:36.178064 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c\" (UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c"
Jan 21 11:25:36 crc kubenswrapper[4881]: I0121 11:25:36.178104 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l5pl\" (UniqueName: \"kubernetes.io/projected/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-kube-api-access-9l5pl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c\" (UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c"
Jan 21 11:25:36 crc kubenswrapper[4881]: I0121 11:25:36.178138 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c\" (UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c"
Jan 21 11:25:36 crc kubenswrapper[4881]: I0121 11:25:36.187496 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c\" (UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c"
Jan 21 11:25:36 crc kubenswrapper[4881]: I0121 11:25:36.187541 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c\" (UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c"
Jan 21 11:25:36 crc kubenswrapper[4881]: I0121 11:25:36.189129 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c\" (UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c"
Jan 21 11:25:36 crc kubenswrapper[4881]: I0121 11:25:36.200364 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l5pl\" (UniqueName: \"kubernetes.io/projected/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-kube-api-access-9l5pl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c\" (UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c"
Jan 21 11:25:36 crc kubenswrapper[4881]: I0121 11:25:36.293226 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c"
Jan 21 11:25:36 crc kubenswrapper[4881]: I0121 11:25:36.966431 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c"]
Jan 21 11:25:36 crc kubenswrapper[4881]: W0121 11:25:36.968561 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a9e212c_bc4b_4dae_9c97_cbc48686c8fc.slice/crio-e7b289017d9a64d186168fcb4d0e1368afa9ea9c6525c60f59a683b8fdfe939a WatchSource:0}: Error finding container e7b289017d9a64d186168fcb4d0e1368afa9ea9c6525c60f59a683b8fdfe939a: Status 404 returned error can't find the container with id e7b289017d9a64d186168fcb4d0e1368afa9ea9c6525c60f59a683b8fdfe939a
Jan 21 11:25:36 crc kubenswrapper[4881]: I0121 11:25:36.972890 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 11:25:37 crc kubenswrapper[4881]: I0121 11:25:37.778818 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c" event={"ID":"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc","Type":"ContainerStarted","Data":"e7b289017d9a64d186168fcb4d0e1368afa9ea9c6525c60f59a683b8fdfe939a"}
Jan 21 11:25:40 crc kubenswrapper[4881]: I0121 11:25:40.311240 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca"
Jan 21 11:25:40 crc kubenswrapper[4881]: E0121 11:25:40.311987 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 11:25:47 crc kubenswrapper[4881]: I0121 11:25:47.124987 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 21 11:25:49 crc kubenswrapper[4881]: I0121 11:25:49.957717 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c" event={"ID":"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc","Type":"ContainerStarted","Data":"45f878b3ab9ad3bdced1034ce00243ffdba515159045ff6c402974179b384bcb"}
Jan 21 11:25:49 crc kubenswrapper[4881]: I0121 11:25:49.978763 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c" podStartSLOduration=3.065983895 podStartE2EDuration="14.978742501s" podCreationTimestamp="2026-01-21 11:25:35 +0000 UTC" firstStartedPulling="2026-01-21 11:25:36.972590422 +0000 UTC m=+1724.232546891" lastFinishedPulling="2026-01-21 11:25:48.885349028 +0000 UTC m=+1736.145305497" observedRunningTime="2026-01-21 11:25:49.97215908 +0000 UTC m=+1737.232115559" watchObservedRunningTime="2026-01-21 11:25:49.978742501 +0000 UTC m=+1737.238698970"
Jan 21 11:25:53 crc kubenswrapper[4881]: I0121 11:25:53.310431 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca"
Jan 21 11:25:53 crc kubenswrapper[4881]: E0121 11:25:53.312327 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 11:26:03 crc kubenswrapper[4881]: I0121 11:26:03.104036 4881 generic.go:334] "Generic (PLEG): container finished" podID="4a9e212c-bc4b-4dae-9c97-cbc48686c8fc" containerID="45f878b3ab9ad3bdced1034ce00243ffdba515159045ff6c402974179b384bcb" exitCode=0
Jan 21 11:26:03 crc kubenswrapper[4881]: I0121 11:26:03.104208 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c" event={"ID":"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc","Type":"ContainerDied","Data":"45f878b3ab9ad3bdced1034ce00243ffdba515159045ff6c402974179b384bcb"}
Jan 21 11:26:04 crc kubenswrapper[4881]: I0121 11:26:04.637430 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c"
Jan 21 11:26:04 crc kubenswrapper[4881]: I0121 11:26:04.786021 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-inventory\") pod \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\" (UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") "
Jan 21 11:26:04 crc kubenswrapper[4881]: I0121 11:26:04.786149 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-ssh-key-openstack-edpm-ipam\") pod \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\" (UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") "
Jan 21 11:26:04 crc kubenswrapper[4881]: I0121 11:26:04.786247 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l5pl\" (UniqueName: \"kubernetes.io/projected/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-kube-api-access-9l5pl\") pod \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\" (UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") "
Jan 21 11:26:04 crc kubenswrapper[4881]: I0121 11:26:04.786397 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-repo-setup-combined-ca-bundle\") pod \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\" (UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") "
Jan 21 11:26:04 crc kubenswrapper[4881]: I0121 11:26:04.792117 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod
"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc" (UID: "4a9e212c-bc4b-4dae-9c97-cbc48686c8fc"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:26:04 crc kubenswrapper[4881]: I0121 11:26:04.793006 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-kube-api-access-9l5pl" (OuterVolumeSpecName: "kube-api-access-9l5pl") pod "4a9e212c-bc4b-4dae-9c97-cbc48686c8fc" (UID: "4a9e212c-bc4b-4dae-9c97-cbc48686c8fc"). InnerVolumeSpecName "kube-api-access-9l5pl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:26:04 crc kubenswrapper[4881]: I0121 11:26:04.818719 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-inventory" (OuterVolumeSpecName: "inventory") pod "4a9e212c-bc4b-4dae-9c97-cbc48686c8fc" (UID: "4a9e212c-bc4b-4dae-9c97-cbc48686c8fc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:26:04 crc kubenswrapper[4881]: I0121 11:26:04.889898 4881 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:26:04 crc kubenswrapper[4881]: I0121 11:26:04.889935 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:26:04 crc kubenswrapper[4881]: I0121 11:26:04.889968 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l5pl\" (UniqueName: \"kubernetes.io/projected/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-kube-api-access-9l5pl\") on node \"crc\" DevicePath \"\"" Jan 21 11:26:04 crc kubenswrapper[4881]: I0121 11:26:04.893075 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4a9e212c-bc4b-4dae-9c97-cbc48686c8fc" (UID: "4a9e212c-bc4b-4dae-9c97-cbc48686c8fc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:26:04 crc kubenswrapper[4881]: I0121 11:26:04.991117 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.147346 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c" event={"ID":"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc","Type":"ContainerDied","Data":"e7b289017d9a64d186168fcb4d0e1368afa9ea9c6525c60f59a683b8fdfe939a"} Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.147385 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7b289017d9a64d186168fcb4d0e1368afa9ea9c6525c60f59a683b8fdfe939a" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.147410 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.217589 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk"] Jan 21 11:26:05 crc kubenswrapper[4881]: E0121 11:26:05.218097 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a9e212c-bc4b-4dae-9c97-cbc48686c8fc" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.218120 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a9e212c-bc4b-4dae-9c97-cbc48686c8fc" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.218323 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a9e212c-bc4b-4dae-9c97-cbc48686c8fc" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.219059 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.221359 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.222027 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.222771 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.223919 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.235226 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk"] Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.299280 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd495475-04cc-47b2-ad0e-7e3b83917ece-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vqzdk\" (UID: \"dd495475-04cc-47b2-ad0e-7e3b83917ece\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.300117 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drqnn\" (UniqueName: \"kubernetes.io/projected/dd495475-04cc-47b2-ad0e-7e3b83917ece-kube-api-access-drqnn\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vqzdk\" (UID: \"dd495475-04cc-47b2-ad0e-7e3b83917ece\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.300325 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd495475-04cc-47b2-ad0e-7e3b83917ece-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vqzdk\" (UID: \"dd495475-04cc-47b2-ad0e-7e3b83917ece\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.403256 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-drqnn\" (UniqueName: \"kubernetes.io/projected/dd495475-04cc-47b2-ad0e-7e3b83917ece-kube-api-access-drqnn\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vqzdk\" (UID: \"dd495475-04cc-47b2-ad0e-7e3b83917ece\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.403381 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd495475-04cc-47b2-ad0e-7e3b83917ece-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vqzdk\" (UID: \"dd495475-04cc-47b2-ad0e-7e3b83917ece\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.403499 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd495475-04cc-47b2-ad0e-7e3b83917ece-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vqzdk\" (UID: \"dd495475-04cc-47b2-ad0e-7e3b83917ece\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.410086 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd495475-04cc-47b2-ad0e-7e3b83917ece-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vqzdk\" (UID: \"dd495475-04cc-47b2-ad0e-7e3b83917ece\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.410525 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd495475-04cc-47b2-ad0e-7e3b83917ece-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vqzdk\" (UID: \"dd495475-04cc-47b2-ad0e-7e3b83917ece\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.422995 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drqnn\" (UniqueName: \"kubernetes.io/projected/dd495475-04cc-47b2-ad0e-7e3b83917ece-kube-api-access-drqnn\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vqzdk\" (UID: \"dd495475-04cc-47b2-ad0e-7e3b83917ece\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.538966 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" Jan 21 11:26:06 crc kubenswrapper[4881]: I0121 11:26:06.165613 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk"] Jan 21 11:26:07 crc kubenswrapper[4881]: I0121 11:26:07.169381 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" event={"ID":"dd495475-04cc-47b2-ad0e-7e3b83917ece","Type":"ContainerStarted","Data":"6a245fb772e4935c1de8be83ad0500624a0c81034e16a0c1338a7e61426ac137"} Jan 21 11:26:07 crc kubenswrapper[4881]: I0121 11:26:07.169431 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" event={"ID":"dd495475-04cc-47b2-ad0e-7e3b83917ece","Type":"ContainerStarted","Data":"e47b110be76f9e83fffaaa8ac4df5ba04674f85999916283750b5ea0d29b4303"} Jan 21 11:26:07 crc kubenswrapper[4881]: I0121 11:26:07.193380 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" podStartSLOduration=1.715362059 podStartE2EDuration="2.193354958s" podCreationTimestamp="2026-01-21 11:26:05 +0000 UTC" firstStartedPulling="2026-01-21 11:26:06.160918558 +0000 UTC m=+1753.420875027" lastFinishedPulling="2026-01-21 11:26:06.638911447 +0000 UTC m=+1753.898867926" observedRunningTime="2026-01-21 11:26:07.184604923 +0000 UTC m=+1754.444561432" watchObservedRunningTime="2026-01-21 11:26:07.193354958 +0000 UTC m=+1754.453311437" Jan 21 11:26:07 crc kubenswrapper[4881]: I0121 11:26:07.313344 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:26:07 crc kubenswrapper[4881]: E0121 11:26:07.313673 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:26:10 crc kubenswrapper[4881]: I0121 11:26:10.201615 4881 generic.go:334] "Generic (PLEG): container finished" podID="dd495475-04cc-47b2-ad0e-7e3b83917ece" containerID="6a245fb772e4935c1de8be83ad0500624a0c81034e16a0c1338a7e61426ac137" exitCode=0 Jan 21 11:26:10 crc kubenswrapper[4881]: I0121 11:26:10.201685 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" event={"ID":"dd495475-04cc-47b2-ad0e-7e3b83917ece","Type":"ContainerDied","Data":"6a245fb772e4935c1de8be83ad0500624a0c81034e16a0c1338a7e61426ac137"} Jan 21 11:26:11 crc kubenswrapper[4881]: I0121 11:26:11.658942 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" Jan 21 11:26:11 crc kubenswrapper[4881]: I0121 11:26:11.751965 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd495475-04cc-47b2-ad0e-7e3b83917ece-ssh-key-openstack-edpm-ipam\") pod \"dd495475-04cc-47b2-ad0e-7e3b83917ece\" (UID: \"dd495475-04cc-47b2-ad0e-7e3b83917ece\") " Jan 21 11:26:11 crc kubenswrapper[4881]: I0121 11:26:11.752115 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd495475-04cc-47b2-ad0e-7e3b83917ece-inventory\") pod \"dd495475-04cc-47b2-ad0e-7e3b83917ece\" (UID: \"dd495475-04cc-47b2-ad0e-7e3b83917ece\") " Jan 21 11:26:11 crc kubenswrapper[4881]: I0121 11:26:11.752265 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drqnn\" (UniqueName: \"kubernetes.io/projected/dd495475-04cc-47b2-ad0e-7e3b83917ece-kube-api-access-drqnn\") pod \"dd495475-04cc-47b2-ad0e-7e3b83917ece\" (UID: \"dd495475-04cc-47b2-ad0e-7e3b83917ece\") " Jan 21 11:26:11 crc kubenswrapper[4881]: I0121 11:26:11.764039 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd495475-04cc-47b2-ad0e-7e3b83917ece-kube-api-access-drqnn" (OuterVolumeSpecName: "kube-api-access-drqnn") pod "dd495475-04cc-47b2-ad0e-7e3b83917ece" (UID: "dd495475-04cc-47b2-ad0e-7e3b83917ece"). InnerVolumeSpecName "kube-api-access-drqnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:26:11 crc kubenswrapper[4881]: I0121 11:26:11.780839 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd495475-04cc-47b2-ad0e-7e3b83917ece-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dd495475-04cc-47b2-ad0e-7e3b83917ece" (UID: "dd495475-04cc-47b2-ad0e-7e3b83917ece"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:26:11 crc kubenswrapper[4881]: I0121 11:26:11.801418 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd495475-04cc-47b2-ad0e-7e3b83917ece-inventory" (OuterVolumeSpecName: "inventory") pod "dd495475-04cc-47b2-ad0e-7e3b83917ece" (UID: "dd495475-04cc-47b2-ad0e-7e3b83917ece"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:26:11 crc kubenswrapper[4881]: I0121 11:26:11.854696 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drqnn\" (UniqueName: \"kubernetes.io/projected/dd495475-04cc-47b2-ad0e-7e3b83917ece-kube-api-access-drqnn\") on node \"crc\" DevicePath \"\"" Jan 21 11:26:11 crc kubenswrapper[4881]: I0121 11:26:11.854746 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd495475-04cc-47b2-ad0e-7e3b83917ece-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:26:11 crc kubenswrapper[4881]: I0121 11:26:11.854762 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd495475-04cc-47b2-ad0e-7e3b83917ece-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.226661 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" event={"ID":"dd495475-04cc-47b2-ad0e-7e3b83917ece","Type":"ContainerDied","Data":"e47b110be76f9e83fffaaa8ac4df5ba04674f85999916283750b5ea0d29b4303"} Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.226716 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e47b110be76f9e83fffaaa8ac4df5ba04674f85999916283750b5ea0d29b4303" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.226758 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.314023 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5"] Jan 21 11:26:12 crc kubenswrapper[4881]: E0121 11:26:12.315084 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd495475-04cc-47b2-ad0e-7e3b83917ece" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.315114 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd495475-04cc-47b2-ad0e-7e3b83917ece" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.315457 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd495475-04cc-47b2-ad0e-7e3b83917ece" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.316628 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.319847 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.320198 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.322406 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.322475 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.332676 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5"] Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.467889 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.468188 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.468295 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6r5w\" (UniqueName: \"kubernetes.io/projected/5930ee4f-c104-4ac5-9440-2a24d110fae5-kube-api-access-q6r5w\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.468532 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.572507 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.572877 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6r5w\" (UniqueName: 
\"kubernetes.io/projected/5930ee4f-c104-4ac5-9440-2a24d110fae5-kube-api-access-q6r5w\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.572952 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.573124 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.576777 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.579992 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.580810 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.592524 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6r5w\" (UniqueName: \"kubernetes.io/projected/5930ee4f-c104-4ac5-9440-2a24d110fae5-kube-api-access-q6r5w\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.651283 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:26:13 crc kubenswrapper[4881]: I0121 11:26:13.203112 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5"] Jan 21 11:26:13 crc kubenswrapper[4881]: I0121 11:26:13.235972 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" event={"ID":"5930ee4f-c104-4ac5-9440-2a24d110fae5","Type":"ContainerStarted","Data":"bde4706bcd913ba3323d2b1125ba1ee7475a762ce3f9d0c4ef8b30b43d404e6b"} Jan 21 11:26:14 crc kubenswrapper[4881]: I0121 11:26:14.247864 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" event={"ID":"5930ee4f-c104-4ac5-9440-2a24d110fae5","Type":"ContainerStarted","Data":"670294433e01fe33af9fd85b65d810eef8d3617ee467e8afa32a1e27221cc5ca"} Jan 21 11:26:14 crc kubenswrapper[4881]: I0121 11:26:14.272207 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" podStartSLOduration=1.758139683 podStartE2EDuration="2.272185034s" podCreationTimestamp="2026-01-21 11:26:12 +0000 UTC" firstStartedPulling="2026-01-21 11:26:13.204032991 +0000 UTC m=+1760.463989460" lastFinishedPulling="2026-01-21 11:26:13.718078342 +0000 UTC m=+1760.978034811" observedRunningTime="2026-01-21 11:26:14.270465623 +0000 UTC m=+1761.530422092" watchObservedRunningTime="2026-01-21 11:26:14.272185034 +0000 UTC m=+1761.532141503" Jan 21 11:26:19 crc kubenswrapper[4881]: I0121 11:26:19.311632 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:26:19 crc kubenswrapper[4881]: E0121 11:26:19.312452 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:26:28 crc kubenswrapper[4881]: I0121 11:26:28.041082 4881 scope.go:117] "RemoveContainer" containerID="20252506bf2921633b620e12ae73d258d135c6a818c92bcf4d604ddbc1f5e46d" Jan 21 11:26:31 crc kubenswrapper[4881]: I0121 11:26:31.311751 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:26:31 crc kubenswrapper[4881]: E0121 11:26:31.312695 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:26:43 crc kubenswrapper[4881]: I0121 11:26:43.317964 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:26:43 crc kubenswrapper[4881]: E0121 11:26:43.318641 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:26:56 crc kubenswrapper[4881]: I0121 11:26:56.310847 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:26:56 crc kubenswrapper[4881]: E0121 11:26:56.311627 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:27:11 crc kubenswrapper[4881]: I0121 11:27:11.310938 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:27:11 crc kubenswrapper[4881]: E0121 11:27:11.312032 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:27:25 crc kubenswrapper[4881]: I0121 11:27:25.315820 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:27:25 crc kubenswrapper[4881]: E0121 11:27:25.319388 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:27:28 crc kubenswrapper[4881]: I0121 11:27:28.136411 4881 scope.go:117] "RemoveContainer" containerID="c7d5411076516ac1067feb6fa2326814efce9d04ded39d593fa3f53c461d73dc" Jan 21 11:27:28 crc kubenswrapper[4881]: I0121 11:27:28.164487 4881 scope.go:117] "RemoveContainer" containerID="243391ce37046a98efbd843bc1e6f28fda173bffe3ce05b733b63f613224e766" Jan 21 11:27:36 crc kubenswrapper[4881]: I0121 11:27:36.310904 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:27:36 crc kubenswrapper[4881]: E0121 11:27:36.311558 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:27:49 crc kubenswrapper[4881]: I0121 11:27:49.312841 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:27:49 crc kubenswrapper[4881]: E0121 11:27:49.313683 4881 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:28:02 crc kubenswrapper[4881]: I0121 11:28:02.311500 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:28:02 crc kubenswrapper[4881]: E0121 11:28:02.312356 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:28:16 crc kubenswrapper[4881]: I0121 11:28:16.311367 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:28:16 crc kubenswrapper[4881]: E0121 11:28:16.312728 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:28:18 crc kubenswrapper[4881]: I0121 11:28:18.047354 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-a34b-account-create-update-hm56c"] Jan 21 11:28:18 crc kubenswrapper[4881]: I0121 11:28:18.059483 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b4bf-account-create-update-6p74j"] Jan 21 11:28:18 crc kubenswrapper[4881]: I0121 11:28:18.076878 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-gc2qj"] Jan 21 11:28:18 crc kubenswrapper[4881]: I0121 11:28:18.087648 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-nv8vf"] Jan 21 11:28:18 crc kubenswrapper[4881]: I0121 11:28:18.098947 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-smj4g"] Jan 21 11:28:18 crc kubenswrapper[4881]: I0121 11:28:18.108602 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-8d4c-account-create-update-f29tp"] Jan 21 11:28:18 crc kubenswrapper[4881]: I0121 11:28:18.118177 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-gc2qj"] Jan 21 11:28:18 crc kubenswrapper[4881]: I0121 11:28:18.127476 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-nv8vf"] Jan 21 11:28:18 crc kubenswrapper[4881]: I0121 11:28:18.136023 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-8d4c-account-create-update-f29tp"] Jan 21 11:28:18 crc kubenswrapper[4881]: I0121 11:28:18.147619 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-b4bf-account-create-update-6p74j"] Jan 21 11:28:18 crc kubenswrapper[4881]: I0121 11:28:18.157134 4881 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/placement-a34b-account-create-update-hm56c"] Jan 21 11:28:18 crc kubenswrapper[4881]: I0121 11:28:18.170491 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-smj4g"] Jan 21 11:28:19 crc kubenswrapper[4881]: I0121 11:28:19.335285 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13ea4f5c-fa1d-485c-80b3-a260d8725e81" path="/var/lib/kubelet/pods/13ea4f5c-fa1d-485c-80b3-a260d8725e81/volumes" Jan 21 11:28:19 crc kubenswrapper[4881]: I0121 11:28:19.336413 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c4be317-c914-45c5-8da4-1fe7d647db7e" path="/var/lib/kubelet/pods/1c4be317-c914-45c5-8da4-1fe7d647db7e/volumes" Jan 21 11:28:19 crc kubenswrapper[4881]: I0121 11:28:19.337467 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="317bbc59-5154-4c0e-920a-3227d1ec4982" path="/var/lib/kubelet/pods/317bbc59-5154-4c0e-920a-3227d1ec4982/volumes" Jan 21 11:28:19 crc kubenswrapper[4881]: I0121 11:28:19.338202 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="331fda3a-4e64-4824-abd7-42eaef7b9b4f" path="/var/lib/kubelet/pods/331fda3a-4e64-4824-abd7-42eaef7b9b4f/volumes" Jan 21 11:28:19 crc kubenswrapper[4881]: I0121 11:28:19.339399 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ecc1262-3ebf-4a17-bc42-507ce55f6d7e" path="/var/lib/kubelet/pods/5ecc1262-3ebf-4a17-bc42-507ce55f6d7e/volumes" Jan 21 11:28:19 crc kubenswrapper[4881]: I0121 11:28:19.340330 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6a422f0-bb4b-442c-a2d7-96ac90ffde83" path="/var/lib/kubelet/pods/b6a422f0-bb4b-442c-a2d7-96ac90ffde83/volumes" Jan 21 11:28:27 crc kubenswrapper[4881]: I0121 11:28:27.310557 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:28:27 crc kubenswrapper[4881]: E0121 11:28:27.311309 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:28:28 crc kubenswrapper[4881]: I0121 11:28:28.231443 4881 scope.go:117] "RemoveContainer" containerID="08a0b7dafd2179b30f57680020c59d606fe75966918c8bb86686a6dacf5de9ff" Jan 21 11:28:28 crc kubenswrapper[4881]: I0121 11:28:28.275876 4881 scope.go:117] "RemoveContainer" containerID="d8dd72ec74cb8c65a23a4d5b59b35333d8b4f0429542fb48634decd408b21787" Jan 21 11:28:28 crc kubenswrapper[4881]: I0121 11:28:28.322256 4881 scope.go:117] "RemoveContainer" containerID="8b53d4f0258b883730ea2ab9cbc22ea1275e34223ca52f3ff089755ba0514b17" Jan 21 11:28:28 crc kubenswrapper[4881]: I0121 11:28:28.372221 4881 scope.go:117] "RemoveContainer" containerID="5dc89d3192dccc5bebeec553b9ca36f3b56735830fa2f8fae09494c5f8979443" Jan 21 11:28:28 crc kubenswrapper[4881]: I0121 11:28:28.422174 4881 scope.go:117] "RemoveContainer" containerID="8e69c6e6b0d6f76b9304a07ebd26d806a9e9908cc09c50913b96d416ca2b1454" Jan 21 11:28:28 crc kubenswrapper[4881]: I0121 11:28:28.495676 4881 scope.go:117] "RemoveContainer" containerID="9ae9aa24bb02508282163c868da5d6ab7a85e49192dbd35ecea2bbccdab0b150" Jan 21 11:28:32 crc kubenswrapper[4881]: I0121 11:28:32.033059 4881 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-n9992"] Jan 21 11:28:32 crc kubenswrapper[4881]: I0121 11:28:32.044923 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-n9992"] Jan 21 11:28:33 crc kubenswrapper[4881]: I0121 11:28:33.333685 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70a2b37a-049a-45a1-aeb5-6b7d5515dd69" path="/var/lib/kubelet/pods/70a2b37a-049a-45a1-aeb5-6b7d5515dd69/volumes" Jan 21 11:28:38 crc kubenswrapper[4881]: I0121 11:28:38.311218 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:28:38 crc kubenswrapper[4881]: E0121 11:28:38.312525 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:28:52 crc kubenswrapper[4881]: I0121 11:28:52.311524 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:28:52 crc kubenswrapper[4881]: E0121 11:28:52.312478 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:29:04 crc kubenswrapper[4881]: I0121 11:29:04.311649 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:29:04 crc kubenswrapper[4881]: E0121 11:29:04.312605 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:29:10 crc kubenswrapper[4881]: I0121 11:29:10.206242 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-r9r4z"] Jan 21 11:29:10 crc kubenswrapper[4881]: I0121 11:29:10.223233 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-170f-account-create-update-8bt4l"] Jan 21 11:29:10 crc kubenswrapper[4881]: I0121 11:29:10.232722 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-b544m"] Jan 21 11:29:10 crc kubenswrapper[4881]: I0121 11:29:10.241848 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-r9r4z"] Jan 21 11:29:10 crc kubenswrapper[4881]: I0121 11:29:10.251024 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-b544m"] Jan 21 11:29:10 crc kubenswrapper[4881]: I0121 11:29:10.270415 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-170f-account-create-update-8bt4l"] Jan 21 11:29:11 crc 
kubenswrapper[4881]: I0121 11:29:11.325888 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="760e8dbf-d827-42ef-969c-1c7409f7ac20" path="/var/lib/kubelet/pods/760e8dbf-d827-42ef-969c-1c7409f7ac20/volumes" Jan 21 11:29:11 crc kubenswrapper[4881]: I0121 11:29:11.327654 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c837cab9-43a5-4b84-a0bd-d915bca31600" path="/var/lib/kubelet/pods/c837cab9-43a5-4b84-a0bd-d915bca31600/volumes" Jan 21 11:29:11 crc kubenswrapper[4881]: I0121 11:29:11.328768 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8cfe009-eba2-4713-b50f-cc334b4ca691" path="/var/lib/kubelet/pods/c8cfe009-eba2-4713-b50f-cc334b4ca691/volumes" Jan 21 11:29:16 crc kubenswrapper[4881]: I0121 11:29:16.045997 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-a5aa-account-create-update-j2nc8"] Jan 21 11:29:16 crc kubenswrapper[4881]: I0121 11:29:16.057219 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-3649-account-create-update-pqj5m"] Jan 21 11:29:16 crc kubenswrapper[4881]: I0121 11:29:16.067676 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-3649-account-create-update-pqj5m"] Jan 21 11:29:16 crc kubenswrapper[4881]: I0121 11:29:16.093376 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-ktp2w"] Jan 21 11:29:16 crc kubenswrapper[4881]: I0121 11:29:16.102860 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-82x9l"] Jan 21 11:29:16 crc kubenswrapper[4881]: I0121 11:29:16.112073 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-c7b7-account-create-update-dcz9r"] Jan 21 11:29:16 crc kubenswrapper[4881]: I0121 11:29:16.120311 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-a5aa-account-create-update-j2nc8"] Jan 21 11:29:16 crc kubenswrapper[4881]: I0121 11:29:16.129650 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-ktp2w"] Jan 21 11:29:16 crc kubenswrapper[4881]: I0121 11:29:16.137377 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-82x9l"] Jan 21 11:29:16 crc kubenswrapper[4881]: I0121 11:29:16.145194 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-c7b7-account-create-update-dcz9r"] Jan 21 11:29:16 crc kubenswrapper[4881]: I0121 11:29:16.310925 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:29:16 crc kubenswrapper[4881]: E0121 11:29:16.311287 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:29:17 crc kubenswrapper[4881]: I0121 11:29:17.328513 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0145b8f9-5452-4f0e-819c-61fbb8badffb" path="/var/lib/kubelet/pods/0145b8f9-5452-4f0e-819c-61fbb8badffb/volumes" Jan 21 11:29:17 crc kubenswrapper[4881]: I0121 11:29:17.329975 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d72ab14-b1c2-4382-847a-00eb254ac958" 
path="/var/lib/kubelet/pods/5d72ab14-b1c2-4382-847a-00eb254ac958/volumes" Jan 21 11:29:17 crc kubenswrapper[4881]: I0121 11:29:17.332135 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f6f337c-95ec-448f-ab58-e7e7fe7abfd4" path="/var/lib/kubelet/pods/6f6f337c-95ec-448f-ab58-e7e7fe7abfd4/volumes" Jan 21 11:29:17 crc kubenswrapper[4881]: I0121 11:29:17.333112 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4b2b4e9-304c-47ae-939a-9d938d012b90" path="/var/lib/kubelet/pods/b4b2b4e9-304c-47ae-939a-9d938d012b90/volumes" Jan 21 11:29:17 crc kubenswrapper[4881]: I0121 11:29:17.335494 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec3ba10e-2cbd-4350-9014-27a92932849f" path="/var/lib/kubelet/pods/ec3ba10e-2cbd-4350-9014-27a92932849f/volumes" Jan 21 11:29:28 crc kubenswrapper[4881]: I0121 11:29:28.647901 4881 scope.go:117] "RemoveContainer" containerID="000840a5458dc374424237a1e0edaa7bc61f3e5c2c1a3524dfdcefbcaa258c53" Jan 21 11:29:28 crc kubenswrapper[4881]: I0121 11:29:28.674447 4881 scope.go:117] "RemoveContainer" containerID="9183c1ea9a3472251b9a9872ac196a0371d8a3a960cf0876e3244bf2dc5fc313" Jan 21 11:29:28 crc kubenswrapper[4881]: I0121 11:29:28.698500 4881 scope.go:117] "RemoveContainer" containerID="19837216e672b1d70dcee3db6a9cc2dfe6a6a6ac2f0ef6c6a1c9729e5d023d0f" Jan 21 11:29:28 crc kubenswrapper[4881]: I0121 11:29:28.766757 4881 scope.go:117] "RemoveContainer" containerID="0287622c020081ba9c95095872909db810663fe9347d92c3e84d5f5ddca8090f" Jan 21 11:29:29 crc kubenswrapper[4881]: I0121 11:29:29.116391 4881 scope.go:117] "RemoveContainer" containerID="9d3665845c2c2c09903d0aa16a7538de5b4dcf05cef7d82865d9c9d446cdaf41" Jan 21 11:29:29 crc kubenswrapper[4881]: I0121 11:29:29.141477 4881 scope.go:117] "RemoveContainer" containerID="842c407700548966028d06c2f685224af9199aeb260a3fcbe49b13c5d2308449" Jan 21 11:29:29 crc kubenswrapper[4881]: I0121 11:29:29.178563 4881 scope.go:117] "RemoveContainer" containerID="8fede96a0f0891ea2a0beeea55c81b92d1d136a372295efbbbb9fb60c32a400b" Jan 21 11:29:29 crc kubenswrapper[4881]: I0121 11:29:29.212707 4881 scope.go:117] "RemoveContainer" containerID="475d11a1d0ffe3143569c01c096587097abd1f5b648c8d0d1064b5b35157b3c4" Jan 21 11:29:29 crc kubenswrapper[4881]: I0121 11:29:29.255696 4881 scope.go:117] "RemoveContainer" containerID="23d18cc60c7d47249b61d06b5e22cae5297e1e798a824f42c26b13569f6185c2" Jan 21 11:29:29 crc kubenswrapper[4881]: I0121 11:29:29.278585 4881 scope.go:117] "RemoveContainer" containerID="68b28d1f90d946399d23686118aca2c39b038f12760a90f94c3980be0fdb6b45" Jan 21 11:29:29 crc kubenswrapper[4881]: I0121 11:29:29.304940 4881 scope.go:117] "RemoveContainer" containerID="d5d6be9da18cdb336cad44c85f030f31c3a241f6234a1b668281031e8ffb56ec" Jan 21 11:29:29 crc kubenswrapper[4881]: I0121 11:29:29.310593 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:29:29 crc kubenswrapper[4881]: E0121 11:29:29.310894 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:29:29 crc kubenswrapper[4881]: I0121 11:29:29.336025 4881 scope.go:117] 
"RemoveContainer" containerID="4830c420695532fe361ac3eb65ee53d659da36dd7a4d7c07a18532e51115b820" Jan 21 11:29:43 crc kubenswrapper[4881]: I0121 11:29:43.320719 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:29:43 crc kubenswrapper[4881]: I0121 11:29:43.592877 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"ef39ee7cfe761ce9a9728441eb10e70a161b503ea812b7dfbf273e44506d3274"} Jan 21 11:29:44 crc kubenswrapper[4881]: I0121 11:29:44.070797 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-44pdb"] Jan 21 11:29:44 crc kubenswrapper[4881]: I0121 11:29:44.083684 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-44pdb"] Jan 21 11:29:45 crc kubenswrapper[4881]: I0121 11:29:45.324776 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34efcb76-01fb-490b-88c0-a4ee1363a01e" path="/var/lib/kubelet/pods/34efcb76-01fb-490b-88c0-a4ee1363a01e/volumes" Jan 21 11:29:58 crc kubenswrapper[4881]: I0121 11:29:58.032428 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-t4mx7"] Jan 21 11:29:58 crc kubenswrapper[4881]: I0121 11:29:58.042610 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-t4mx7"] Jan 21 11:29:59 crc kubenswrapper[4881]: I0121 11:29:59.075256 4881 generic.go:334] "Generic (PLEG): container finished" podID="5930ee4f-c104-4ac5-9440-2a24d110fae5" containerID="670294433e01fe33af9fd85b65d810eef8d3617ee467e8afa32a1e27221cc5ca" exitCode=0 Jan 21 11:29:59 crc kubenswrapper[4881]: I0121 11:29:59.075333 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" event={"ID":"5930ee4f-c104-4ac5-9440-2a24d110fae5","Type":"ContainerDied","Data":"670294433e01fe33af9fd85b65d810eef8d3617ee467e8afa32a1e27221cc5ca"} Jan 21 11:29:59 crc kubenswrapper[4881]: I0121 11:29:59.323442 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc7e598c-b449-4e8c-9214-44e27cb45e53" path="/var/lib/kubelet/pods/bc7e598c-b449-4e8c-9214-44e27cb45e53/volumes" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.191366 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k"] Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.385573 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.388767 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.390118 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.411279 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k"] Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.485744 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pftj\" (UniqueName: \"kubernetes.io/projected/0563880c-563e-4cc5-93a0-c2af095788cb-kube-api-access-6pftj\") pod \"collect-profiles-29483250-hpz5k\" (UID: \"0563880c-563e-4cc5-93a0-c2af095788cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.485955 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0563880c-563e-4cc5-93a0-c2af095788cb-secret-volume\") pod \"collect-profiles-29483250-hpz5k\" (UID: \"0563880c-563e-4cc5-93a0-c2af095788cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.486090 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0563880c-563e-4cc5-93a0-c2af095788cb-config-volume\") pod \"collect-profiles-29483250-hpz5k\" (UID: \"0563880c-563e-4cc5-93a0-c2af095788cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.587432 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0563880c-563e-4cc5-93a0-c2af095788cb-config-volume\") pod \"collect-profiles-29483250-hpz5k\" (UID: \"0563880c-563e-4cc5-93a0-c2af095788cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.587607 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pftj\" (UniqueName: \"kubernetes.io/projected/0563880c-563e-4cc5-93a0-c2af095788cb-kube-api-access-6pftj\") pod \"collect-profiles-29483250-hpz5k\" (UID: \"0563880c-563e-4cc5-93a0-c2af095788cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.587696 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0563880c-563e-4cc5-93a0-c2af095788cb-secret-volume\") pod \"collect-profiles-29483250-hpz5k\" (UID: \"0563880c-563e-4cc5-93a0-c2af095788cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.588610 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0563880c-563e-4cc5-93a0-c2af095788cb-config-volume\") pod 
\"collect-profiles-29483250-hpz5k\" (UID: \"0563880c-563e-4cc5-93a0-c2af095788cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.595175 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0563880c-563e-4cc5-93a0-c2af095788cb-secret-volume\") pod \"collect-profiles-29483250-hpz5k\" (UID: \"0563880c-563e-4cc5-93a0-c2af095788cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.606150 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pftj\" (UniqueName: \"kubernetes.io/projected/0563880c-563e-4cc5-93a0-c2af095788cb-kube-api-access-6pftj\") pod \"collect-profiles-29483250-hpz5k\" (UID: \"0563880c-563e-4cc5-93a0-c2af095788cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.720964 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.877369 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.896690 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-ssh-key-openstack-edpm-ipam\") pod \"5930ee4f-c104-4ac5-9440-2a24d110fae5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.896812 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6r5w\" (UniqueName: \"kubernetes.io/projected/5930ee4f-c104-4ac5-9440-2a24d110fae5-kube-api-access-q6r5w\") pod \"5930ee4f-c104-4ac5-9440-2a24d110fae5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.896886 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-inventory\") pod \"5930ee4f-c104-4ac5-9440-2a24d110fae5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.896909 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-bootstrap-combined-ca-bundle\") pod \"5930ee4f-c104-4ac5-9440-2a24d110fae5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.905704 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5930ee4f-c104-4ac5-9440-2a24d110fae5-kube-api-access-q6r5w" (OuterVolumeSpecName: "kube-api-access-q6r5w") pod "5930ee4f-c104-4ac5-9440-2a24d110fae5" (UID: "5930ee4f-c104-4ac5-9440-2a24d110fae5"). InnerVolumeSpecName "kube-api-access-q6r5w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.918965 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "5930ee4f-c104-4ac5-9440-2a24d110fae5" (UID: "5930ee4f-c104-4ac5-9440-2a24d110fae5"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.928996 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-inventory" (OuterVolumeSpecName: "inventory") pod "5930ee4f-c104-4ac5-9440-2a24d110fae5" (UID: "5930ee4f-c104-4ac5-9440-2a24d110fae5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.932522 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5930ee4f-c104-4ac5-9440-2a24d110fae5" (UID: "5930ee4f-c104-4ac5-9440-2a24d110fae5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.006846 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.006897 4881 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.006915 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.006934 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6r5w\" (UniqueName: \"kubernetes.io/projected/5930ee4f-c104-4ac5-9440-2a24d110fae5-kube-api-access-q6r5w\") on node \"crc\" DevicePath \"\"" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.096621 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" event={"ID":"5930ee4f-c104-4ac5-9440-2a24d110fae5","Type":"ContainerDied","Data":"bde4706bcd913ba3323d2b1125ba1ee7475a762ce3f9d0c4ef8b30b43d404e6b"} Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.096670 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bde4706bcd913ba3323d2b1125ba1ee7475a762ce3f9d0c4ef8b30b43d404e6b" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.096741 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.195655 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt"] Jan 21 11:30:01 crc kubenswrapper[4881]: E0121 11:30:01.196294 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5930ee4f-c104-4ac5-9440-2a24d110fae5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.196312 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5930ee4f-c104-4ac5-9440-2a24d110fae5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.196546 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="5930ee4f-c104-4ac5-9440-2a24d110fae5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.197321 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.200004 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.200142 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.200273 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.200328 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.211519 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt\" (UID: \"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.211813 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt\" (UID: \"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.212017 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79mkd\" (UniqueName: \"kubernetes.io/projected/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-kube-api-access-79mkd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt\" (UID: \"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.223899 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt"] Jan 21 11:30:01 crc kubenswrapper[4881]: 
I0121 11:30:01.247051 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k"] Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.313666 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt\" (UID: \"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.313751 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt\" (UID: \"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.313831 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79mkd\" (UniqueName: \"kubernetes.io/projected/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-kube-api-access-79mkd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt\" (UID: \"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.319143 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt\" (UID: \"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.319172 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt\" (UID: \"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.346101 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79mkd\" (UniqueName: \"kubernetes.io/projected/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-kube-api-access-79mkd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt\" (UID: \"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.520144 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" Jan 21 11:30:02 crc kubenswrapper[4881]: I0121 11:30:02.234495 4881 generic.go:334] "Generic (PLEG): container finished" podID="0563880c-563e-4cc5-93a0-c2af095788cb" containerID="c97b0fba984ac7ac90aa9867ceabf4a4b1015c378fef6bf95655dcf59a8cdfd7" exitCode=0 Jan 21 11:30:02 crc kubenswrapper[4881]: I0121 11:30:02.234689 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" event={"ID":"0563880c-563e-4cc5-93a0-c2af095788cb","Type":"ContainerDied","Data":"c97b0fba984ac7ac90aa9867ceabf4a4b1015c378fef6bf95655dcf59a8cdfd7"} Jan 21 11:30:02 crc kubenswrapper[4881]: I0121 11:30:02.234715 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" event={"ID":"0563880c-563e-4cc5-93a0-c2af095788cb","Type":"ContainerStarted","Data":"34da932f062cb57a55dc0f56949e474ce9f5cdd3084f9df91d17f54517eed521"} Jan 21 11:30:02 crc kubenswrapper[4881]: I0121 11:30:02.384111 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt"] Jan 21 11:30:02 crc kubenswrapper[4881]: W0121 11:30:02.385527 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01f76bc7_59dc_4fd0_8ca8_90ce72cb6f45.slice/crio-9f31968a0bdbdf01d41bad45f1b1b5ed4fb58b40ac6fee51815e11ca82a16e46 WatchSource:0}: Error finding container 9f31968a0bdbdf01d41bad45f1b1b5ed4fb58b40ac6fee51815e11ca82a16e46: Status 404 returned error can't find the container with id 9f31968a0bdbdf01d41bad45f1b1b5ed4fb58b40ac6fee51815e11ca82a16e46 Jan 21 11:30:03 crc kubenswrapper[4881]: I0121 11:30:03.251145 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" event={"ID":"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45","Type":"ContainerStarted","Data":"d7065389e2ebfdcbfd63692c15d886f13375179640678ddba4e24b11c5c250dd"} Jan 21 11:30:03 crc kubenswrapper[4881]: I0121 11:30:03.251873 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" event={"ID":"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45","Type":"ContainerStarted","Data":"9f31968a0bdbdf01d41bad45f1b1b5ed4fb58b40ac6fee51815e11ca82a16e46"} Jan 21 11:30:03 crc kubenswrapper[4881]: I0121 11:30:03.815110 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" Jan 21 11:30:03 crc kubenswrapper[4881]: I0121 11:30:03.838653 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" podStartSLOduration=2.255792113 podStartE2EDuration="2.83863026s" podCreationTimestamp="2026-01-21 11:30:01 +0000 UTC" firstStartedPulling="2026-01-21 11:30:02.389837165 +0000 UTC m=+1989.649793634" lastFinishedPulling="2026-01-21 11:30:02.972675322 +0000 UTC m=+1990.232631781" observedRunningTime="2026-01-21 11:30:03.273349786 +0000 UTC m=+1990.533306265" watchObservedRunningTime="2026-01-21 11:30:03.83863026 +0000 UTC m=+1991.098586729" Jan 21 11:30:03 crc kubenswrapper[4881]: I0121 11:30:03.876294 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pftj\" (UniqueName: \"kubernetes.io/projected/0563880c-563e-4cc5-93a0-c2af095788cb-kube-api-access-6pftj\") pod \"0563880c-563e-4cc5-93a0-c2af095788cb\" (UID: \"0563880c-563e-4cc5-93a0-c2af095788cb\") " Jan 21 11:30:03 crc kubenswrapper[4881]: I0121 11:30:03.876437 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0563880c-563e-4cc5-93a0-c2af095788cb-config-volume\") pod \"0563880c-563e-4cc5-93a0-c2af095788cb\" (UID: \"0563880c-563e-4cc5-93a0-c2af095788cb\") " Jan 21 11:30:03 crc kubenswrapper[4881]: I0121 11:30:03.876511 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0563880c-563e-4cc5-93a0-c2af095788cb-secret-volume\") pod \"0563880c-563e-4cc5-93a0-c2af095788cb\" (UID: \"0563880c-563e-4cc5-93a0-c2af095788cb\") " Jan 21 11:30:03 crc kubenswrapper[4881]: I0121 11:30:03.877443 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0563880c-563e-4cc5-93a0-c2af095788cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "0563880c-563e-4cc5-93a0-c2af095788cb" (UID: "0563880c-563e-4cc5-93a0-c2af095788cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:30:03 crc kubenswrapper[4881]: I0121 11:30:03.884054 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0563880c-563e-4cc5-93a0-c2af095788cb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0563880c-563e-4cc5-93a0-c2af095788cb" (UID: "0563880c-563e-4cc5-93a0-c2af095788cb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:30:03 crc kubenswrapper[4881]: I0121 11:30:03.884082 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0563880c-563e-4cc5-93a0-c2af095788cb-kube-api-access-6pftj" (OuterVolumeSpecName: "kube-api-access-6pftj") pod "0563880c-563e-4cc5-93a0-c2af095788cb" (UID: "0563880c-563e-4cc5-93a0-c2af095788cb"). InnerVolumeSpecName "kube-api-access-6pftj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:30:03 crc kubenswrapper[4881]: I0121 11:30:03.978377 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0563880c-563e-4cc5-93a0-c2af095788cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 11:30:03 crc kubenswrapper[4881]: I0121 11:30:03.978425 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0563880c-563e-4cc5-93a0-c2af095788cb-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 11:30:03 crc kubenswrapper[4881]: I0121 11:30:03.978442 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pftj\" (UniqueName: \"kubernetes.io/projected/0563880c-563e-4cc5-93a0-c2af095788cb-kube-api-access-6pftj\") on node \"crc\" DevicePath \"\"" Jan 21 11:30:04 crc kubenswrapper[4881]: I0121 11:30:04.262380 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" event={"ID":"0563880c-563e-4cc5-93a0-c2af095788cb","Type":"ContainerDied","Data":"34da932f062cb57a55dc0f56949e474ce9f5cdd3084f9df91d17f54517eed521"} Jan 21 11:30:04 crc kubenswrapper[4881]: I0121 11:30:04.262436 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34da932f062cb57a55dc0f56949e474ce9f5cdd3084f9df91d17f54517eed521" Jan 21 11:30:04 crc kubenswrapper[4881]: I0121 11:30:04.262408 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" Jan 21 11:30:04 crc kubenswrapper[4881]: I0121 11:30:04.893115 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk"] Jan 21 11:30:04 crc kubenswrapper[4881]: I0121 11:30:04.902422 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk"] Jan 21 11:30:05 crc kubenswrapper[4881]: I0121 11:30:05.325821 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="303bdbe6-3bb4-4ace-86b1-f489c795580f" path="/var/lib/kubelet/pods/303bdbe6-3bb4-4ace-86b1-f489c795580f/volumes" Jan 21 11:30:14 crc kubenswrapper[4881]: I0121 11:30:14.287385 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w5mmz"] Jan 21 11:30:14 crc kubenswrapper[4881]: E0121 11:30:14.288536 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0563880c-563e-4cc5-93a0-c2af095788cb" containerName="collect-profiles" Jan 21 11:30:14 crc kubenswrapper[4881]: I0121 11:30:14.288556 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0563880c-563e-4cc5-93a0-c2af095788cb" containerName="collect-profiles" Jan 21 11:30:14 crc kubenswrapper[4881]: I0121 11:30:14.288838 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="0563880c-563e-4cc5-93a0-c2af095788cb" containerName="collect-profiles" Jan 21 11:30:14 crc kubenswrapper[4881]: I0121 11:30:14.290457 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:14 crc kubenswrapper[4881]: I0121 11:30:14.335834 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5mmz"] Jan 21 11:30:14 crc kubenswrapper[4881]: I0121 11:30:14.422857 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b968s\" (UniqueName: \"kubernetes.io/projected/2f7bf98e-335f-406f-8ef8-069f86093c55-kube-api-access-b968s\") pod \"redhat-marketplace-w5mmz\" (UID: \"2f7bf98e-335f-406f-8ef8-069f86093c55\") " pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:14 crc kubenswrapper[4881]: I0121 11:30:14.423345 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f7bf98e-335f-406f-8ef8-069f86093c55-catalog-content\") pod \"redhat-marketplace-w5mmz\" (UID: \"2f7bf98e-335f-406f-8ef8-069f86093c55\") " pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:14 crc kubenswrapper[4881]: I0121 11:30:14.423388 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f7bf98e-335f-406f-8ef8-069f86093c55-utilities\") pod \"redhat-marketplace-w5mmz\" (UID: \"2f7bf98e-335f-406f-8ef8-069f86093c55\") " pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:14 crc kubenswrapper[4881]: I0121 11:30:14.527262 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f7bf98e-335f-406f-8ef8-069f86093c55-catalog-content\") pod \"redhat-marketplace-w5mmz\" (UID: \"2f7bf98e-335f-406f-8ef8-069f86093c55\") " pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:14 crc kubenswrapper[4881]: I0121 11:30:14.527646 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f7bf98e-335f-406f-8ef8-069f86093c55-utilities\") pod \"redhat-marketplace-w5mmz\" (UID: \"2f7bf98e-335f-406f-8ef8-069f86093c55\") " pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:14 crc kubenswrapper[4881]: I0121 11:30:14.527897 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b968s\" (UniqueName: \"kubernetes.io/projected/2f7bf98e-335f-406f-8ef8-069f86093c55-kube-api-access-b968s\") pod \"redhat-marketplace-w5mmz\" (UID: \"2f7bf98e-335f-406f-8ef8-069f86093c55\") " pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:14 crc kubenswrapper[4881]: I0121 11:30:14.527932 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f7bf98e-335f-406f-8ef8-069f86093c55-catalog-content\") pod \"redhat-marketplace-w5mmz\" (UID: \"2f7bf98e-335f-406f-8ef8-069f86093c55\") " pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:14 crc kubenswrapper[4881]: I0121 11:30:14.528156 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f7bf98e-335f-406f-8ef8-069f86093c55-utilities\") pod \"redhat-marketplace-w5mmz\" (UID: \"2f7bf98e-335f-406f-8ef8-069f86093c55\") " pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:14 crc kubenswrapper[4881]: I0121 11:30:14.551315 4881 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-b968s\" (UniqueName: \"kubernetes.io/projected/2f7bf98e-335f-406f-8ef8-069f86093c55-kube-api-access-b968s\") pod \"redhat-marketplace-w5mmz\" (UID: \"2f7bf98e-335f-406f-8ef8-069f86093c55\") " pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:14 crc kubenswrapper[4881]: I0121 11:30:14.659293 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:15 crc kubenswrapper[4881]: I0121 11:30:15.142235 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5mmz"] Jan 21 11:30:15 crc kubenswrapper[4881]: W0121 11:30:15.148631 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f7bf98e_335f_406f_8ef8_069f86093c55.slice/crio-f9664760a6abe2fd92cc6c7d5038daf2f3334a151e64a19140c80a7ac40d0bdc WatchSource:0}: Error finding container f9664760a6abe2fd92cc6c7d5038daf2f3334a151e64a19140c80a7ac40d0bdc: Status 404 returned error can't find the container with id f9664760a6abe2fd92cc6c7d5038daf2f3334a151e64a19140c80a7ac40d0bdc Jan 21 11:30:15 crc kubenswrapper[4881]: I0121 11:30:15.380454 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5mmz" event={"ID":"2f7bf98e-335f-406f-8ef8-069f86093c55","Type":"ContainerStarted","Data":"48d5d26b6c9086a6b947d5294b328f1c7e8f26fa1ce1593b0120714fc18e44b1"} Jan 21 11:30:15 crc kubenswrapper[4881]: I0121 11:30:15.380508 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5mmz" event={"ID":"2f7bf98e-335f-406f-8ef8-069f86093c55","Type":"ContainerStarted","Data":"f9664760a6abe2fd92cc6c7d5038daf2f3334a151e64a19140c80a7ac40d0bdc"} Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.098453 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cgr87"] Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.106115 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.112123 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cgr87"] Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.268979 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65rfw\" (UniqueName: \"kubernetes.io/projected/e28b5533-edc8-47ef-8ba6-23368631d10d-kube-api-access-65rfw\") pod \"redhat-operators-cgr87\" (UID: \"e28b5533-edc8-47ef-8ba6-23368631d10d\") " pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.269077 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e28b5533-edc8-47ef-8ba6-23368631d10d-utilities\") pod \"redhat-operators-cgr87\" (UID: \"e28b5533-edc8-47ef-8ba6-23368631d10d\") " pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.269135 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e28b5533-edc8-47ef-8ba6-23368631d10d-catalog-content\") pod \"redhat-operators-cgr87\" (UID: \"e28b5533-edc8-47ef-8ba6-23368631d10d\") " pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.371335 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e28b5533-edc8-47ef-8ba6-23368631d10d-utilities\") pod \"redhat-operators-cgr87\" (UID: \"e28b5533-edc8-47ef-8ba6-23368631d10d\") " pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.371403 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e28b5533-edc8-47ef-8ba6-23368631d10d-catalog-content\") pod \"redhat-operators-cgr87\" (UID: \"e28b5533-edc8-47ef-8ba6-23368631d10d\") " pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.371643 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65rfw\" (UniqueName: \"kubernetes.io/projected/e28b5533-edc8-47ef-8ba6-23368631d10d-kube-api-access-65rfw\") pod \"redhat-operators-cgr87\" (UID: \"e28b5533-edc8-47ef-8ba6-23368631d10d\") " pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.372164 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e28b5533-edc8-47ef-8ba6-23368631d10d-catalog-content\") pod \"redhat-operators-cgr87\" (UID: \"e28b5533-edc8-47ef-8ba6-23368631d10d\") " pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.373242 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e28b5533-edc8-47ef-8ba6-23368631d10d-utilities\") pod \"redhat-operators-cgr87\" (UID: \"e28b5533-edc8-47ef-8ba6-23368631d10d\") " pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.395769 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-65rfw\" (UniqueName: \"kubernetes.io/projected/e28b5533-edc8-47ef-8ba6-23368631d10d-kube-api-access-65rfw\") pod \"redhat-operators-cgr87\" (UID: \"e28b5533-edc8-47ef-8ba6-23368631d10d\") " pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.397334 4881 generic.go:334] "Generic (PLEG): container finished" podID="2f7bf98e-335f-406f-8ef8-069f86093c55" containerID="48d5d26b6c9086a6b947d5294b328f1c7e8f26fa1ce1593b0120714fc18e44b1" exitCode=0 Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.397377 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5mmz" event={"ID":"2f7bf98e-335f-406f-8ef8-069f86093c55","Type":"ContainerDied","Data":"48d5d26b6c9086a6b947d5294b328f1c7e8f26fa1ce1593b0120714fc18e44b1"} Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.453676 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.957216 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cgr87"] Jan 21 11:30:17 crc kubenswrapper[4881]: I0121 11:30:17.417320 4881 generic.go:334] "Generic (PLEG): container finished" podID="e28b5533-edc8-47ef-8ba6-23368631d10d" containerID="d6ee22258af69df6704251a1ea48a067b0aad9b9017145fdec7581e1437ace89" exitCode=0 Jan 21 11:30:17 crc kubenswrapper[4881]: I0121 11:30:17.417416 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cgr87" event={"ID":"e28b5533-edc8-47ef-8ba6-23368631d10d","Type":"ContainerDied","Data":"d6ee22258af69df6704251a1ea48a067b0aad9b9017145fdec7581e1437ace89"} Jan 21 11:30:17 crc kubenswrapper[4881]: I0121 11:30:17.417492 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cgr87" event={"ID":"e28b5533-edc8-47ef-8ba6-23368631d10d","Type":"ContainerStarted","Data":"1f7f3ae2471976e97c8ea641c9792ee7bc57f8b6be98d0f78836de61e158f4a0"} Jan 21 11:30:17 crc kubenswrapper[4881]: I0121 11:30:17.423959 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5mmz" event={"ID":"2f7bf98e-335f-406f-8ef8-069f86093c55","Type":"ContainerStarted","Data":"c1eba3ae03b1d6805b90d42d0ec2f798fa4704781a61dbdfa8159f414d7bb80e"} Jan 21 11:30:18 crc kubenswrapper[4881]: I0121 11:30:18.438419 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cgr87" event={"ID":"e28b5533-edc8-47ef-8ba6-23368631d10d","Type":"ContainerStarted","Data":"c222168e828ddf8dc31adf5d20e6251d1aebd2db36a121297ee44763be9bc74e"} Jan 21 11:30:18 crc kubenswrapper[4881]: I0121 11:30:18.441389 4881 generic.go:334] "Generic (PLEG): container finished" podID="2f7bf98e-335f-406f-8ef8-069f86093c55" containerID="c1eba3ae03b1d6805b90d42d0ec2f798fa4704781a61dbdfa8159f414d7bb80e" exitCode=0 Jan 21 11:30:18 crc kubenswrapper[4881]: I0121 11:30:18.441433 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5mmz" event={"ID":"2f7bf98e-335f-406f-8ef8-069f86093c55","Type":"ContainerDied","Data":"c1eba3ae03b1d6805b90d42d0ec2f798fa4704781a61dbdfa8159f414d7bb80e"} Jan 21 11:30:22 crc kubenswrapper[4881]: I0121 11:30:22.477326 4881 generic.go:334] "Generic (PLEG): container finished" podID="e28b5533-edc8-47ef-8ba6-23368631d10d" containerID="c222168e828ddf8dc31adf5d20e6251d1aebd2db36a121297ee44763be9bc74e" 
exitCode=0 Jan 21 11:30:22 crc kubenswrapper[4881]: I0121 11:30:22.477403 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cgr87" event={"ID":"e28b5533-edc8-47ef-8ba6-23368631d10d","Type":"ContainerDied","Data":"c222168e828ddf8dc31adf5d20e6251d1aebd2db36a121297ee44763be9bc74e"} Jan 21 11:30:23 crc kubenswrapper[4881]: I0121 11:30:23.490132 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cgr87" event={"ID":"e28b5533-edc8-47ef-8ba6-23368631d10d","Type":"ContainerStarted","Data":"5e0abf8ffd3df2b4543f3b78f4df1de894199c4c001e6db2e5a3872e46d7a54b"} Jan 21 11:30:23 crc kubenswrapper[4881]: I0121 11:30:23.493270 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5mmz" event={"ID":"2f7bf98e-335f-406f-8ef8-069f86093c55","Type":"ContainerStarted","Data":"0ab0a82d406b0a4031e5637f72af69a714ded06513932b035aeb5ac564f21b6b"} Jan 21 11:30:23 crc kubenswrapper[4881]: I0121 11:30:23.513915 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cgr87" podStartSLOduration=2.065297799 podStartE2EDuration="7.513899556s" podCreationTimestamp="2026-01-21 11:30:16 +0000 UTC" firstStartedPulling="2026-01-21 11:30:17.419847487 +0000 UTC m=+2004.679803956" lastFinishedPulling="2026-01-21 11:30:22.868449244 +0000 UTC m=+2010.128405713" observedRunningTime="2026-01-21 11:30:23.510364 +0000 UTC m=+2010.770320469" watchObservedRunningTime="2026-01-21 11:30:23.513899556 +0000 UTC m=+2010.773856025" Jan 21 11:30:23 crc kubenswrapper[4881]: I0121 11:30:23.541570 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w5mmz" podStartSLOduration=6.379220021 podStartE2EDuration="9.54154931s" podCreationTimestamp="2026-01-21 11:30:14 +0000 UTC" firstStartedPulling="2026-01-21 11:30:16.399591625 +0000 UTC m=+2003.659548094" lastFinishedPulling="2026-01-21 11:30:19.561920914 +0000 UTC m=+2006.821877383" observedRunningTime="2026-01-21 11:30:23.53458797 +0000 UTC m=+2010.794544439" watchObservedRunningTime="2026-01-21 11:30:23.54154931 +0000 UTC m=+2010.801505779" Jan 21 11:30:24 crc kubenswrapper[4881]: I0121 11:30:24.660428 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:24 crc kubenswrapper[4881]: I0121 11:30:24.660767 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:25 crc kubenswrapper[4881]: I0121 11:30:25.702530 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-w5mmz" podUID="2f7bf98e-335f-406f-8ef8-069f86093c55" containerName="registry-server" probeResult="failure" output=< Jan 21 11:30:25 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 11:30:25 crc kubenswrapper[4881]: > Jan 21 11:30:26 crc kubenswrapper[4881]: I0121 11:30:26.719073 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:26 crc kubenswrapper[4881]: I0121 11:30:26.719121 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:27 crc kubenswrapper[4881]: I0121 11:30:27.778178 4881 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-cgr87" podUID="e28b5533-edc8-47ef-8ba6-23368631d10d" containerName="registry-server" probeResult="failure" output=< Jan 21 11:30:27 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 11:30:27 crc kubenswrapper[4881]: > Jan 21 11:30:29 crc kubenswrapper[4881]: I0121 11:30:29.550590 4881 scope.go:117] "RemoveContainer" containerID="498906e9fbb3b564603759f2238f54ad3d7c8a3ccff8535f1f6031fd2e192fd4" Jan 21 11:30:29 crc kubenswrapper[4881]: I0121 11:30:29.599753 4881 scope.go:117] "RemoveContainer" containerID="7b3d565271b021e09dee5880082bea3cf44364df7d0a06382823cae7b26b1046" Jan 21 11:30:29 crc kubenswrapper[4881]: I0121 11:30:29.635082 4881 scope.go:117] "RemoveContainer" containerID="2f6a1a1e4268540ee682b58127eb41126b116ba4e30186b584ee325d0961ebec" Jan 21 11:30:29 crc kubenswrapper[4881]: I0121 11:30:29.697513 4881 scope.go:117] "RemoveContainer" containerID="b4ed75bebc3e4f7b35b331a2f216bede613a9086f548aa45e96cbef5724a690a" Jan 21 11:30:29 crc kubenswrapper[4881]: I0121 11:30:29.750424 4881 scope.go:117] "RemoveContainer" containerID="a807273d95c9864f3ecabade018dc0a91eb28a83bcfcbef9786d9473502a12a5" Jan 21 11:30:34 crc kubenswrapper[4881]: I0121 11:30:34.720439 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:34 crc kubenswrapper[4881]: I0121 11:30:34.775642 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:34 crc kubenswrapper[4881]: I0121 11:30:34.961836 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5mmz"] Jan 21 11:30:35 crc kubenswrapper[4881]: I0121 11:30:35.993015 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w5mmz" podUID="2f7bf98e-335f-406f-8ef8-069f86093c55" containerName="registry-server" containerID="cri-o://0ab0a82d406b0a4031e5637f72af69a714ded06513932b035aeb5ac564f21b6b" gracePeriod=2 Jan 21 11:30:36 crc kubenswrapper[4881]: I0121 11:30:36.512704 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:36 crc kubenswrapper[4881]: I0121 11:30:36.558696 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:37 crc kubenswrapper[4881]: I0121 11:30:37.362965 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cgr87"] Jan 21 11:30:38 crc kubenswrapper[4881]: I0121 11:30:38.016527 4881 generic.go:334] "Generic (PLEG): container finished" podID="2f7bf98e-335f-406f-8ef8-069f86093c55" containerID="0ab0a82d406b0a4031e5637f72af69a714ded06513932b035aeb5ac564f21b6b" exitCode=0 Jan 21 11:30:38 crc kubenswrapper[4881]: I0121 11:30:38.016598 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5mmz" event={"ID":"2f7bf98e-335f-406f-8ef8-069f86093c55","Type":"ContainerDied","Data":"0ab0a82d406b0a4031e5637f72af69a714ded06513932b035aeb5ac564f21b6b"} Jan 21 11:30:38 crc kubenswrapper[4881]: I0121 11:30:38.017104 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cgr87" podUID="e28b5533-edc8-47ef-8ba6-23368631d10d" containerName="registry-server" 
containerID="cri-o://5e0abf8ffd3df2b4543f3b78f4df1de894199c4c001e6db2e5a3872e46d7a54b" gracePeriod=2 Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.033192 4881 generic.go:334] "Generic (PLEG): container finished" podID="e28b5533-edc8-47ef-8ba6-23368631d10d" containerID="5e0abf8ffd3df2b4543f3b78f4df1de894199c4c001e6db2e5a3872e46d7a54b" exitCode=0 Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.033273 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cgr87" event={"ID":"e28b5533-edc8-47ef-8ba6-23368631d10d","Type":"ContainerDied","Data":"5e0abf8ffd3df2b4543f3b78f4df1de894199c4c001e6db2e5a3872e46d7a54b"} Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.033343 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cgr87" event={"ID":"e28b5533-edc8-47ef-8ba6-23368631d10d","Type":"ContainerDied","Data":"1f7f3ae2471976e97c8ea641c9792ee7bc57f8b6be98d0f78836de61e158f4a0"} Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.033360 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f7f3ae2471976e97c8ea641c9792ee7bc57f8b6be98d0f78836de61e158f4a0" Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.035812 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5mmz" event={"ID":"2f7bf98e-335f-406f-8ef8-069f86093c55","Type":"ContainerDied","Data":"f9664760a6abe2fd92cc6c7d5038daf2f3334a151e64a19140c80a7ac40d0bdc"} Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.035868 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9664760a6abe2fd92cc6c7d5038daf2f3334a151e64a19140c80a7ac40d0bdc" Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.072239 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.085535 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.273849 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e28b5533-edc8-47ef-8ba6-23368631d10d-utilities\") pod \"e28b5533-edc8-47ef-8ba6-23368631d10d\" (UID: \"e28b5533-edc8-47ef-8ba6-23368631d10d\") " Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.273991 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b968s\" (UniqueName: \"kubernetes.io/projected/2f7bf98e-335f-406f-8ef8-069f86093c55-kube-api-access-b968s\") pod \"2f7bf98e-335f-406f-8ef8-069f86093c55\" (UID: \"2f7bf98e-335f-406f-8ef8-069f86093c55\") " Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.274728 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e28b5533-edc8-47ef-8ba6-23368631d10d-utilities" (OuterVolumeSpecName: "utilities") pod "e28b5533-edc8-47ef-8ba6-23368631d10d" (UID: "e28b5533-edc8-47ef-8ba6-23368631d10d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.274812 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65rfw\" (UniqueName: \"kubernetes.io/projected/e28b5533-edc8-47ef-8ba6-23368631d10d-kube-api-access-65rfw\") pod \"e28b5533-edc8-47ef-8ba6-23368631d10d\" (UID: \"e28b5533-edc8-47ef-8ba6-23368631d10d\") " Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.274866 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e28b5533-edc8-47ef-8ba6-23368631d10d-catalog-content\") pod \"e28b5533-edc8-47ef-8ba6-23368631d10d\" (UID: \"e28b5533-edc8-47ef-8ba6-23368631d10d\") " Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.274958 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f7bf98e-335f-406f-8ef8-069f86093c55-utilities\") pod \"2f7bf98e-335f-406f-8ef8-069f86093c55\" (UID: \"2f7bf98e-335f-406f-8ef8-069f86093c55\") " Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.275062 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f7bf98e-335f-406f-8ef8-069f86093c55-catalog-content\") pod \"2f7bf98e-335f-406f-8ef8-069f86093c55\" (UID: \"2f7bf98e-335f-406f-8ef8-069f86093c55\") " Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.275521 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e28b5533-edc8-47ef-8ba6-23368631d10d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.275705 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f7bf98e-335f-406f-8ef8-069f86093c55-utilities" (OuterVolumeSpecName: "utilities") pod "2f7bf98e-335f-406f-8ef8-069f86093c55" (UID: "2f7bf98e-335f-406f-8ef8-069f86093c55"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.280339 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f7bf98e-335f-406f-8ef8-069f86093c55-kube-api-access-b968s" (OuterVolumeSpecName: "kube-api-access-b968s") pod "2f7bf98e-335f-406f-8ef8-069f86093c55" (UID: "2f7bf98e-335f-406f-8ef8-069f86093c55"). InnerVolumeSpecName "kube-api-access-b968s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.288100 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e28b5533-edc8-47ef-8ba6-23368631d10d-kube-api-access-65rfw" (OuterVolumeSpecName: "kube-api-access-65rfw") pod "e28b5533-edc8-47ef-8ba6-23368631d10d" (UID: "e28b5533-edc8-47ef-8ba6-23368631d10d"). InnerVolumeSpecName "kube-api-access-65rfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.304468 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f7bf98e-335f-406f-8ef8-069f86093c55-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f7bf98e-335f-406f-8ef8-069f86093c55" (UID: "2f7bf98e-335f-406f-8ef8-069f86093c55"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.377103 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f7bf98e-335f-406f-8ef8-069f86093c55-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.377141 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f7bf98e-335f-406f-8ef8-069f86093c55-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.377151 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b968s\" (UniqueName: \"kubernetes.io/projected/2f7bf98e-335f-406f-8ef8-069f86093c55-kube-api-access-b968s\") on node \"crc\" DevicePath \"\"" Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.377160 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65rfw\" (UniqueName: \"kubernetes.io/projected/e28b5533-edc8-47ef-8ba6-23368631d10d-kube-api-access-65rfw\") on node \"crc\" DevicePath \"\"" Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.394549 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e28b5533-edc8-47ef-8ba6-23368631d10d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e28b5533-edc8-47ef-8ba6-23368631d10d" (UID: "e28b5533-edc8-47ef-8ba6-23368631d10d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.479696 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e28b5533-edc8-47ef-8ba6-23368631d10d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:30:40 crc kubenswrapper[4881]: I0121 11:30:40.045985 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:40 crc kubenswrapper[4881]: I0121 11:30:40.046022 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:40 crc kubenswrapper[4881]: I0121 11:30:40.085477 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5mmz"] Jan 21 11:30:40 crc kubenswrapper[4881]: I0121 11:30:40.115810 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5mmz"] Jan 21 11:30:40 crc kubenswrapper[4881]: I0121 11:30:40.124627 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cgr87"] Jan 21 11:30:40 crc kubenswrapper[4881]: I0121 11:30:40.133304 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cgr87"] Jan 21 11:30:41 crc kubenswrapper[4881]: I0121 11:30:41.328266 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f7bf98e-335f-406f-8ef8-069f86093c55" path="/var/lib/kubelet/pods/2f7bf98e-335f-406f-8ef8-069f86093c55/volumes" Jan 21 11:30:41 crc kubenswrapper[4881]: I0121 11:30:41.329245 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e28b5533-edc8-47ef-8ba6-23368631d10d" path="/var/lib/kubelet/pods/e28b5533-edc8-47ef-8ba6-23368631d10d/volumes" Jan 21 11:30:46 crc kubenswrapper[4881]: I0121 11:30:46.047176 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-kc9jz"] Jan 21 11:30:46 crc kubenswrapper[4881]: I0121 11:30:46.058972 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-mzhtm"] Jan 21 11:30:46 crc kubenswrapper[4881]: I0121 11:30:46.069275 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-mzhtm"] Jan 21 11:30:46 crc kubenswrapper[4881]: I0121 11:30:46.079484 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-kc9jz"] Jan 21 11:30:47 crc kubenswrapper[4881]: I0121 11:30:47.687291 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33f9442b-24ee-47d4-b914-19d32a5cad74" path="/var/lib/kubelet/pods/33f9442b-24ee-47d4-b914-19d32a5cad74/volumes" Jan 21 11:30:47 crc kubenswrapper[4881]: I0121 11:30:47.689887 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f568ffda-82a9-4f47-89d3-13b89a35c9b4" path="/var/lib/kubelet/pods/f568ffda-82a9-4f47-89d3-13b89a35c9b4/volumes" Jan 21 11:30:50 crc kubenswrapper[4881]: I0121 11:30:50.029211 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-t6mz2"] Jan 21 11:30:50 crc kubenswrapper[4881]: I0121 11:30:50.037505 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-t6mz2"] Jan 21 11:30:51 crc kubenswrapper[4881]: I0121 11:30:51.325908 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869a596b-159c-4185-a4ab-0e36c5d130fc" path="/var/lib/kubelet/pods/869a596b-159c-4185-a4ab-0e36c5d130fc/volumes" Jan 21 11:31:00 crc kubenswrapper[4881]: I0121 11:31:00.043944 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-slhtz"] Jan 21 11:31:00 crc kubenswrapper[4881]: I0121 11:31:00.054007 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-slhtz"] Jan 21 11:31:01 crc kubenswrapper[4881]: I0121 11:31:01.321209 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bf52889-d5f3-44f8-b657-8ff3790962d1" path="/var/lib/kubelet/pods/4bf52889-d5f3-44f8-b657-8ff3790962d1/volumes" Jan 21 11:31:07 crc 
kubenswrapper[4881]: I0121 11:31:07.054040 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-4wxvl"] Jan 21 11:31:07 crc kubenswrapper[4881]: I0121 11:31:07.090190 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-4wxvl"] Jan 21 11:31:07 crc kubenswrapper[4881]: I0121 11:31:07.322194 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65250dcf-0f0f-4fa6-8d57-e07d3d29f290" path="/var/lib/kubelet/pods/65250dcf-0f0f-4fa6-8d57-e07d3d29f290/volumes" Jan 21 11:31:24 crc kubenswrapper[4881]: I0121 11:31:24.042396 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-mxb97"] Jan 21 11:31:24 crc kubenswrapper[4881]: I0121 11:31:24.053664 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-mxb97"] Jan 21 11:31:25 crc kubenswrapper[4881]: I0121 11:31:25.325515 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="349e8898-8b7c-414a-8357-d431c8b81bf4" path="/var/lib/kubelet/pods/349e8898-8b7c-414a-8357-d431c8b81bf4/volumes" Jan 21 11:31:29 crc kubenswrapper[4881]: I0121 11:31:29.841419 4881 scope.go:117] "RemoveContainer" containerID="6641f95a17dea3fe9aff6d4faf3bd17425257c19253868f2b83b7d7d759a48fd" Jan 21 11:31:29 crc kubenswrapper[4881]: I0121 11:31:29.910680 4881 scope.go:117] "RemoveContainer" containerID="c648692c811ad6f54f474e55240cf83d10bccce020989330faa953f52c62836c" Jan 21 11:31:30 crc kubenswrapper[4881]: I0121 11:31:30.002983 4881 scope.go:117] "RemoveContainer" containerID="60c7ee63bf67b35a7137c545eb5e36b0ba7f24fe96f583c9314a3bcf2ea933c6" Jan 21 11:31:30 crc kubenswrapper[4881]: I0121 11:31:30.047668 4881 scope.go:117] "RemoveContainer" containerID="3a796b1b54b7432132400a5a214afb4cf61aaada5f5054cc747d5e74194d9dae" Jan 21 11:31:30 crc kubenswrapper[4881]: I0121 11:31:30.106041 4881 scope.go:117] "RemoveContainer" containerID="b750c2c4c79eaa65d01394c5ce39a3b9970863a1b04d7248173d08889a7ae0be" Jan 21 11:31:30 crc kubenswrapper[4881]: I0121 11:31:30.152221 4881 scope.go:117] "RemoveContainer" containerID="e31e701604fd33a6bb82c0b6900e3f3bdeaa0b71abb7488fd4edd2c71ed37a56" Jan 21 11:31:59 crc kubenswrapper[4881]: I0121 11:31:59.062111 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-jdk2x"] Jan 21 11:31:59 crc kubenswrapper[4881]: I0121 11:31:59.071655 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-b85xv"] Jan 21 11:31:59 crc kubenswrapper[4881]: I0121 11:31:59.080233 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-b4dc-account-create-update-46bk2"] Jan 21 11:31:59 crc kubenswrapper[4881]: I0121 11:31:59.090179 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-jdk2x"] Jan 21 11:31:59 crc kubenswrapper[4881]: I0121 11:31:59.100586 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-b4dc-account-create-update-46bk2"] Jan 21 11:31:59 crc kubenswrapper[4881]: I0121 11:31:59.109024 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-b85xv"] Jan 21 11:31:59 crc kubenswrapper[4881]: I0121 11:31:59.327618 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a601b0e-b326-4e55-901e-08a32fe24005" path="/var/lib/kubelet/pods/2a601b0e-b326-4e55-901e-08a32fe24005/volumes" Jan 21 11:31:59 crc kubenswrapper[4881]: I0121 11:31:59.328808 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="4d8a04fd-1a86-454f-bd69-64ad270b8357" path="/var/lib/kubelet/pods/4d8a04fd-1a86-454f-bd69-64ad270b8357/volumes" Jan 21 11:31:59 crc kubenswrapper[4881]: I0121 11:31:59.330097 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="502efce3-0d16-491d-b6fa-1b1d98f76d4b" path="/var/lib/kubelet/pods/502efce3-0d16-491d-b6fa-1b1d98f76d4b/volumes" Jan 21 11:31:59 crc kubenswrapper[4881]: I0121 11:31:59.851306 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:31:59 crc kubenswrapper[4881]: I0121 11:31:59.851684 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:32:00 crc kubenswrapper[4881]: I0121 11:32:00.032529 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-fb46-account-create-update-xxwmq"] Jan 21 11:32:00 crc kubenswrapper[4881]: I0121 11:32:00.042126 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-f99bl"] Jan 21 11:32:00 crc kubenswrapper[4881]: I0121 11:32:00.052226 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-5627-account-create-update-mbnwf"] Jan 21 11:32:00 crc kubenswrapper[4881]: I0121 11:32:00.066218 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-5627-account-create-update-mbnwf"] Jan 21 11:32:00 crc kubenswrapper[4881]: I0121 11:32:00.082237 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-fb46-account-create-update-xxwmq"] Jan 21 11:32:00 crc kubenswrapper[4881]: I0121 11:32:00.104018 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-f99bl"] Jan 21 11:32:01 crc kubenswrapper[4881]: I0121 11:32:01.325395 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29487dae-24e9-4d5b-9819-99516df78630" path="/var/lib/kubelet/pods/29487dae-24e9-4d5b-9819-99516df78630/volumes" Jan 21 11:32:01 crc kubenswrapper[4881]: I0121 11:32:01.327398 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de50b4a3-643f-4e4a-9853-b794eae5c08c" path="/var/lib/kubelet/pods/de50b4a3-643f-4e4a-9853-b794eae5c08c/volumes" Jan 21 11:32:01 crc kubenswrapper[4881]: I0121 11:32:01.329113 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2c35a47-0e6e-4760-9026-617ca187b066" path="/var/lib/kubelet/pods/f2c35a47-0e6e-4760-9026-617ca187b066/volumes" Jan 21 11:32:29 crc kubenswrapper[4881]: I0121 11:32:29.850912 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:32:29 crc kubenswrapper[4881]: I0121 11:32:29.851958 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:32:30 crc kubenswrapper[4881]: I0121 11:32:30.316398 4881 scope.go:117] "RemoveContainer" containerID="22038197b765a72901f7e4d04d0bebb17e8d3bca09464adc6dc75e99375c24ab" Jan 21 11:32:30 crc kubenswrapper[4881]: I0121 11:32:30.354545 4881 scope.go:117] "RemoveContainer" containerID="e072378bb8b79adf91d2701f6ed4a0743a1956ccf92868309d50c74d1a40ff46" Jan 21 11:32:30 crc kubenswrapper[4881]: I0121 11:32:30.449429 4881 scope.go:117] "RemoveContainer" containerID="5d3f34869256c4d21e6b17d94ceaa6baf87aefe4c608982c7e1561bfc3b81de2" Jan 21 11:32:30 crc kubenswrapper[4881]: I0121 11:32:30.494561 4881 scope.go:117] "RemoveContainer" containerID="dccd9ebbabd2787629df88e189e045b4233f9efdaa17a33f088ad8c951d3530a" Jan 21 11:32:30 crc kubenswrapper[4881]: I0121 11:32:30.540836 4881 scope.go:117] "RemoveContainer" containerID="27659f5aab69bf4af66ab9aeb1d61a07fd49c77e8daa35d08cb33096b28e9074" Jan 21 11:32:30 crc kubenswrapper[4881]: I0121 11:32:30.618175 4881 scope.go:117] "RemoveContainer" containerID="3e8735972d4959fbfdcc07dada19674d2a9110125d71fdfe160979bcc5be0481" Jan 21 11:32:32 crc kubenswrapper[4881]: I0121 11:32:32.272842 4881 generic.go:334] "Generic (PLEG): container finished" podID="01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45" containerID="d7065389e2ebfdcbfd63692c15d886f13375179640678ddba4e24b11c5c250dd" exitCode=0 Jan 21 11:32:32 crc kubenswrapper[4881]: I0121 11:32:32.272928 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" event={"ID":"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45","Type":"ContainerDied","Data":"d7065389e2ebfdcbfd63692c15d886f13375179640678ddba4e24b11c5c250dd"} Jan 21 11:32:33 crc kubenswrapper[4881]: I0121 11:32:33.805341 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" Jan 21 11:32:33 crc kubenswrapper[4881]: I0121 11:32:33.863260 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79mkd\" (UniqueName: \"kubernetes.io/projected/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-kube-api-access-79mkd\") pod \"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45\" (UID: \"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45\") " Jan 21 11:32:33 crc kubenswrapper[4881]: I0121 11:32:33.863658 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-ssh-key-openstack-edpm-ipam\") pod \"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45\" (UID: \"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45\") " Jan 21 11:32:33 crc kubenswrapper[4881]: I0121 11:32:33.863829 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-inventory\") pod \"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45\" (UID: \"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45\") " Jan 21 11:32:33 crc kubenswrapper[4881]: I0121 11:32:33.870873 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-kube-api-access-79mkd" (OuterVolumeSpecName: "kube-api-access-79mkd") pod "01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45" (UID: "01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45"). InnerVolumeSpecName "kube-api-access-79mkd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:32:33 crc kubenswrapper[4881]: I0121 11:32:33.896050 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45" (UID: "01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:32:33 crc kubenswrapper[4881]: I0121 11:32:33.912250 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-inventory" (OuterVolumeSpecName: "inventory") pod "01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45" (UID: "01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:32:33 crc kubenswrapper[4881]: I0121 11:32:33.966539 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:32:33 crc kubenswrapper[4881]: I0121 11:32:33.966591 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79mkd\" (UniqueName: \"kubernetes.io/projected/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-kube-api-access-79mkd\") on node \"crc\" DevicePath \"\"" Jan 21 11:32:33 crc kubenswrapper[4881]: I0121 11:32:33.966607 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.291867 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" event={"ID":"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45","Type":"ContainerDied","Data":"9f31968a0bdbdf01d41bad45f1b1b5ed4fb58b40ac6fee51815e11ca82a16e46"} Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.291913 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f31968a0bdbdf01d41bad45f1b1b5ed4fb58b40ac6fee51815e11ca82a16e46" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.291915 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.403570 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6"] Jan 21 11:32:34 crc kubenswrapper[4881]: E0121 11:32:34.404216 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28b5533-edc8-47ef-8ba6-23368631d10d" containerName="registry-server" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.404237 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28b5533-edc8-47ef-8ba6-23368631d10d" containerName="registry-server" Jan 21 11:32:34 crc kubenswrapper[4881]: E0121 11:32:34.404254 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.404281 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 21 11:32:34 crc kubenswrapper[4881]: E0121 11:32:34.404295 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f7bf98e-335f-406f-8ef8-069f86093c55" containerName="extract-utilities" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.404301 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f7bf98e-335f-406f-8ef8-069f86093c55" containerName="extract-utilities" Jan 21 11:32:34 crc kubenswrapper[4881]: E0121 11:32:34.404320 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f7bf98e-335f-406f-8ef8-069f86093c55" containerName="registry-server" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.404326 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f7bf98e-335f-406f-8ef8-069f86093c55" containerName="registry-server" Jan 21 11:32:34 crc kubenswrapper[4881]: E0121 11:32:34.404365 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28b5533-edc8-47ef-8ba6-23368631d10d" containerName="extract-utilities" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.404372 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28b5533-edc8-47ef-8ba6-23368631d10d" containerName="extract-utilities" Jan 21 11:32:34 crc kubenswrapper[4881]: E0121 11:32:34.404383 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28b5533-edc8-47ef-8ba6-23368631d10d" containerName="extract-content" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.404390 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28b5533-edc8-47ef-8ba6-23368631d10d" containerName="extract-content" Jan 21 11:32:34 crc kubenswrapper[4881]: E0121 11:32:34.404411 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f7bf98e-335f-406f-8ef8-069f86093c55" containerName="extract-content" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.404434 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f7bf98e-335f-406f-8ef8-069f86093c55" containerName="extract-content" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.404675 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f7bf98e-335f-406f-8ef8-069f86093c55" containerName="registry-server" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.404700 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e28b5533-edc8-47ef-8ba6-23368631d10d" containerName="registry-server" 
Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.404717 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.406226 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.411400 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.411686 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.412087 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.412270 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.413762 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6"] Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.482415 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6\" (UID: \"24a093f9-cd67-48f9-a18b-48d1a79a8aa0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.482815 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwwm9\" (UniqueName: \"kubernetes.io/projected/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-kube-api-access-wwwm9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6\" (UID: \"24a093f9-cd67-48f9-a18b-48d1a79a8aa0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.482944 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6\" (UID: \"24a093f9-cd67-48f9-a18b-48d1a79a8aa0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.586411 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6\" (UID: \"24a093f9-cd67-48f9-a18b-48d1a79a8aa0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.586461 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwwm9\" (UniqueName: \"kubernetes.io/projected/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-kube-api-access-wwwm9\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6\" (UID: \"24a093f9-cd67-48f9-a18b-48d1a79a8aa0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.586494 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6\" (UID: \"24a093f9-cd67-48f9-a18b-48d1a79a8aa0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.591177 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6\" (UID: \"24a093f9-cd67-48f9-a18b-48d1a79a8aa0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.596892 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6\" (UID: \"24a093f9-cd67-48f9-a18b-48d1a79a8aa0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.603744 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwwm9\" (UniqueName: \"kubernetes.io/projected/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-kube-api-access-wwwm9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6\" (UID: \"24a093f9-cd67-48f9-a18b-48d1a79a8aa0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.727516 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" Jan 21 11:32:35 crc kubenswrapper[4881]: I0121 11:32:35.298958 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6"] Jan 21 11:32:35 crc kubenswrapper[4881]: I0121 11:32:35.305811 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 11:32:36 crc kubenswrapper[4881]: I0121 11:32:36.316355 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" event={"ID":"24a093f9-cd67-48f9-a18b-48d1a79a8aa0","Type":"ContainerStarted","Data":"ed91e50a3880cb037a332efeeea663c905f6d34b8520e7608505f8f61898c93d"} Jan 21 11:32:36 crc kubenswrapper[4881]: I0121 11:32:36.316698 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" event={"ID":"24a093f9-cd67-48f9-a18b-48d1a79a8aa0","Type":"ContainerStarted","Data":"7a8fa3b39b588ac3bed4bee992d7ff3c312e5258aac1318986c1e1881a279a1c"} Jan 21 11:32:36 crc kubenswrapper[4881]: I0121 11:32:36.347980 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" podStartSLOduration=1.850976952 podStartE2EDuration="2.3479595s" podCreationTimestamp="2026-01-21 11:32:34 +0000 UTC" firstStartedPulling="2026-01-21 11:32:35.304576503 +0000 UTC m=+2142.564532982" lastFinishedPulling="2026-01-21 11:32:35.801559061 +0000 UTC m=+2143.061515530" observedRunningTime="2026-01-21 11:32:36.342026894 +0000 UTC m=+2143.601983383" watchObservedRunningTime="2026-01-21 11:32:36.3479595 +0000 UTC m=+2143.607915969" Jan 21 11:32:42 crc kubenswrapper[4881]: I0121 11:32:42.046766 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-f7mmp"] Jan 21 11:32:42 crc kubenswrapper[4881]: I0121 11:32:42.055956 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-f7mmp"] Jan 21 11:32:43 crc kubenswrapper[4881]: I0121 11:32:43.339689 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16c22e38-1b3d-44b8-9519-0769200d708b" path="/var/lib/kubelet/pods/16c22e38-1b3d-44b8-9519-0769200d708b/volumes" Jan 21 11:32:59 crc kubenswrapper[4881]: I0121 11:32:59.851770 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:32:59 crc kubenswrapper[4881]: I0121 11:32:59.852566 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:32:59 crc kubenswrapper[4881]: I0121 11:32:59.852639 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 11:32:59 crc kubenswrapper[4881]: I0121 11:32:59.854092 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"ef39ee7cfe761ce9a9728441eb10e70a161b503ea812b7dfbf273e44506d3274"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:32:59 crc kubenswrapper[4881]: I0121 11:32:59.854206 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://ef39ee7cfe761ce9a9728441eb10e70a161b503ea812b7dfbf273e44506d3274" gracePeriod=600 Jan 21 11:33:00 crc kubenswrapper[4881]: I0121 11:33:00.626930 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="ef39ee7cfe761ce9a9728441eb10e70a161b503ea812b7dfbf273e44506d3274" exitCode=0 Jan 21 11:33:00 crc kubenswrapper[4881]: I0121 11:33:00.627452 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"ef39ee7cfe761ce9a9728441eb10e70a161b503ea812b7dfbf273e44506d3274"} Jan 21 11:33:00 crc kubenswrapper[4881]: I0121 11:33:00.627526 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f"} Jan 21 11:33:00 crc kubenswrapper[4881]: I0121 11:33:00.627550 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:33:11 crc kubenswrapper[4881]: I0121 11:33:11.051924 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-qgqh7"] Jan 21 11:33:11 crc kubenswrapper[4881]: I0121 11:33:11.067722 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-qgqh7"] Jan 21 11:33:11 crc kubenswrapper[4881]: I0121 11:33:11.329637 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad" path="/var/lib/kubelet/pods/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad/volumes" Jan 21 11:33:13 crc kubenswrapper[4881]: I0121 11:33:13.031229 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sf7xj"] Jan 21 11:33:13 crc kubenswrapper[4881]: I0121 11:33:13.047871 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sf7xj"] Jan 21 11:33:13 crc kubenswrapper[4881]: I0121 11:33:13.324429 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="813d73da-18da-40fa-b949-bbeec6604ac9" path="/var/lib/kubelet/pods/813d73da-18da-40fa-b949-bbeec6604ac9/volumes" Jan 21 11:33:30 crc kubenswrapper[4881]: I0121 11:33:30.791676 4881 scope.go:117] "RemoveContainer" containerID="45d2c9cf95b1e6ab35e425681a61a8e4775263f35ab1c8463912de139e00b535" Jan 21 11:33:30 crc kubenswrapper[4881]: I0121 11:33:30.877508 4881 scope.go:117] "RemoveContainer" containerID="0055b21217090cd15d9d0b17356b22b40f32a70cf1a35f1e9043b6cc9a7f1186" Jan 21 11:33:30 crc kubenswrapper[4881]: I0121 11:33:30.945247 4881 scope.go:117] "RemoveContainer" containerID="02004fbf2f26b53236286799b468ab78450f8557fc37a01d6e78bf2e7876befc" Jan 21 11:33:39 crc kubenswrapper[4881]: I0121 11:33:39.105333 4881 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7jd4s"] Jan 21 11:33:39 crc kubenswrapper[4881]: I0121 11:33:39.109221 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:39 crc kubenswrapper[4881]: I0121 11:33:39.116963 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7jd4s"] Jan 21 11:33:39 crc kubenswrapper[4881]: I0121 11:33:39.269582 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/067c1d92-f45d-4b2d-978c-7db14c5db12c-utilities\") pod \"community-operators-7jd4s\" (UID: \"067c1d92-f45d-4b2d-978c-7db14c5db12c\") " pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:39 crc kubenswrapper[4881]: I0121 11:33:39.269679 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/067c1d92-f45d-4b2d-978c-7db14c5db12c-catalog-content\") pod \"community-operators-7jd4s\" (UID: \"067c1d92-f45d-4b2d-978c-7db14c5db12c\") " pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:39 crc kubenswrapper[4881]: I0121 11:33:39.270399 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhnkc\" (UniqueName: \"kubernetes.io/projected/067c1d92-f45d-4b2d-978c-7db14c5db12c-kube-api-access-xhnkc\") pod \"community-operators-7jd4s\" (UID: \"067c1d92-f45d-4b2d-978c-7db14c5db12c\") " pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:39 crc kubenswrapper[4881]: I0121 11:33:39.583069 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/067c1d92-f45d-4b2d-978c-7db14c5db12c-utilities\") pod \"community-operators-7jd4s\" (UID: \"067c1d92-f45d-4b2d-978c-7db14c5db12c\") " pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:39 crc kubenswrapper[4881]: I0121 11:33:39.583197 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/067c1d92-f45d-4b2d-978c-7db14c5db12c-catalog-content\") pod \"community-operators-7jd4s\" (UID: \"067c1d92-f45d-4b2d-978c-7db14c5db12c\") " pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:39 crc kubenswrapper[4881]: I0121 11:33:39.583324 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhnkc\" (UniqueName: \"kubernetes.io/projected/067c1d92-f45d-4b2d-978c-7db14c5db12c-kube-api-access-xhnkc\") pod \"community-operators-7jd4s\" (UID: \"067c1d92-f45d-4b2d-978c-7db14c5db12c\") " pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:39 crc kubenswrapper[4881]: I0121 11:33:39.587735 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/067c1d92-f45d-4b2d-978c-7db14c5db12c-utilities\") pod \"community-operators-7jd4s\" (UID: \"067c1d92-f45d-4b2d-978c-7db14c5db12c\") " pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:39 crc kubenswrapper[4881]: I0121 11:33:39.591188 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/067c1d92-f45d-4b2d-978c-7db14c5db12c-catalog-content\") pod 
\"community-operators-7jd4s\" (UID: \"067c1d92-f45d-4b2d-978c-7db14c5db12c\") " pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:39 crc kubenswrapper[4881]: I0121 11:33:39.609675 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhnkc\" (UniqueName: \"kubernetes.io/projected/067c1d92-f45d-4b2d-978c-7db14c5db12c-kube-api-access-xhnkc\") pod \"community-operators-7jd4s\" (UID: \"067c1d92-f45d-4b2d-978c-7db14c5db12c\") " pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:39 crc kubenswrapper[4881]: I0121 11:33:39.743140 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:40 crc kubenswrapper[4881]: I0121 11:33:40.283820 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7jd4s"] Jan 21 11:33:41 crc kubenswrapper[4881]: I0121 11:33:41.164921 4881 generic.go:334] "Generic (PLEG): container finished" podID="067c1d92-f45d-4b2d-978c-7db14c5db12c" containerID="da1884db75984a22d15c0d5244bbfd183ce4833da864081225239071f7cec101" exitCode=0 Jan 21 11:33:41 crc kubenswrapper[4881]: I0121 11:33:41.165034 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jd4s" event={"ID":"067c1d92-f45d-4b2d-978c-7db14c5db12c","Type":"ContainerDied","Data":"da1884db75984a22d15c0d5244bbfd183ce4833da864081225239071f7cec101"} Jan 21 11:33:41 crc kubenswrapper[4881]: I0121 11:33:41.165131 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jd4s" event={"ID":"067c1d92-f45d-4b2d-978c-7db14c5db12c","Type":"ContainerStarted","Data":"25bb6209d91174507b7f8c32f8e2ad4514560130ba6ed8ac62902a3fc7a9a941"} Jan 21 11:33:43 crc kubenswrapper[4881]: I0121 11:33:43.184769 4881 generic.go:334] "Generic (PLEG): container finished" podID="067c1d92-f45d-4b2d-978c-7db14c5db12c" containerID="c10512d9c18e4cb3f71ce9e97dc85557eb1d6bd93eecea4367efb88fd50b12d7" exitCode=0 Jan 21 11:33:43 crc kubenswrapper[4881]: I0121 11:33:43.184874 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jd4s" event={"ID":"067c1d92-f45d-4b2d-978c-7db14c5db12c","Type":"ContainerDied","Data":"c10512d9c18e4cb3f71ce9e97dc85557eb1d6bd93eecea4367efb88fd50b12d7"} Jan 21 11:33:44 crc kubenswrapper[4881]: I0121 11:33:44.196743 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jd4s" event={"ID":"067c1d92-f45d-4b2d-978c-7db14c5db12c","Type":"ContainerStarted","Data":"564f6b823a6f1aa09a39d5db433824f256cbc10b60795637c9c287ec0ebbc3a2"} Jan 21 11:33:44 crc kubenswrapper[4881]: I0121 11:33:44.227751 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7jd4s" podStartSLOduration=2.780929822 podStartE2EDuration="5.227723931s" podCreationTimestamp="2026-01-21 11:33:39 +0000 UTC" firstStartedPulling="2026-01-21 11:33:41.167928747 +0000 UTC m=+2208.427885216" lastFinishedPulling="2026-01-21 11:33:43.614722856 +0000 UTC m=+2210.874679325" observedRunningTime="2026-01-21 11:33:44.217438847 +0000 UTC m=+2211.477395326" watchObservedRunningTime="2026-01-21 11:33:44.227723931 +0000 UTC m=+2211.487680410" Jan 21 11:33:49 crc kubenswrapper[4881]: I0121 11:33:49.743611 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:49 crc 
kubenswrapper[4881]: I0121 11:33:49.746423 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:49 crc kubenswrapper[4881]: I0121 11:33:49.792728 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:50 crc kubenswrapper[4881]: I0121 11:33:50.311370 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:50 crc kubenswrapper[4881]: I0121 11:33:50.373304 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7jd4s"] Jan 21 11:33:52 crc kubenswrapper[4881]: I0121 11:33:52.272412 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7jd4s" podUID="067c1d92-f45d-4b2d-978c-7db14c5db12c" containerName="registry-server" containerID="cri-o://564f6b823a6f1aa09a39d5db433824f256cbc10b60795637c9c287ec0ebbc3a2" gracePeriod=2 Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.101753 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.158818 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhnkc\" (UniqueName: \"kubernetes.io/projected/067c1d92-f45d-4b2d-978c-7db14c5db12c-kube-api-access-xhnkc\") pod \"067c1d92-f45d-4b2d-978c-7db14c5db12c\" (UID: \"067c1d92-f45d-4b2d-978c-7db14c5db12c\") " Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.158945 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/067c1d92-f45d-4b2d-978c-7db14c5db12c-utilities\") pod \"067c1d92-f45d-4b2d-978c-7db14c5db12c\" (UID: \"067c1d92-f45d-4b2d-978c-7db14c5db12c\") " Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.159091 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/067c1d92-f45d-4b2d-978c-7db14c5db12c-catalog-content\") pod \"067c1d92-f45d-4b2d-978c-7db14c5db12c\" (UID: \"067c1d92-f45d-4b2d-978c-7db14c5db12c\") " Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.162882 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/067c1d92-f45d-4b2d-978c-7db14c5db12c-utilities" (OuterVolumeSpecName: "utilities") pod "067c1d92-f45d-4b2d-978c-7db14c5db12c" (UID: "067c1d92-f45d-4b2d-978c-7db14c5db12c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.190209 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/067c1d92-f45d-4b2d-978c-7db14c5db12c-kube-api-access-xhnkc" (OuterVolumeSpecName: "kube-api-access-xhnkc") pod "067c1d92-f45d-4b2d-978c-7db14c5db12c" (UID: "067c1d92-f45d-4b2d-978c-7db14c5db12c"). InnerVolumeSpecName "kube-api-access-xhnkc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.273944 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhnkc\" (UniqueName: \"kubernetes.io/projected/067c1d92-f45d-4b2d-978c-7db14c5db12c-kube-api-access-xhnkc\") on node \"crc\" DevicePath \"\"" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.273999 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/067c1d92-f45d-4b2d-978c-7db14c5db12c-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.282046 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/067c1d92-f45d-4b2d-978c-7db14c5db12c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "067c1d92-f45d-4b2d-978c-7db14c5db12c" (UID: "067c1d92-f45d-4b2d-978c-7db14c5db12c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.324307 4881 generic.go:334] "Generic (PLEG): container finished" podID="067c1d92-f45d-4b2d-978c-7db14c5db12c" containerID="564f6b823a6f1aa09a39d5db433824f256cbc10b60795637c9c287ec0ebbc3a2" exitCode=0 Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.324430 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.339241 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jd4s" event={"ID":"067c1d92-f45d-4b2d-978c-7db14c5db12c","Type":"ContainerDied","Data":"564f6b823a6f1aa09a39d5db433824f256cbc10b60795637c9c287ec0ebbc3a2"} Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.349894 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jd4s" event={"ID":"067c1d92-f45d-4b2d-978c-7db14c5db12c","Type":"ContainerDied","Data":"25bb6209d91174507b7f8c32f8e2ad4514560130ba6ed8ac62902a3fc7a9a941"} Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.349924 4881 scope.go:117] "RemoveContainer" containerID="564f6b823a6f1aa09a39d5db433824f256cbc10b60795637c9c287ec0ebbc3a2" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.375878 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/067c1d92-f45d-4b2d-978c-7db14c5db12c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.407363 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7jd4s"] Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.432505 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7jd4s"] Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.444958 4881 scope.go:117] "RemoveContainer" containerID="c10512d9c18e4cb3f71ce9e97dc85557eb1d6bd93eecea4367efb88fd50b12d7" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.546522 4881 scope.go:117] "RemoveContainer" containerID="da1884db75984a22d15c0d5244bbfd183ce4833da864081225239071f7cec101" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.567004 4881 scope.go:117] "RemoveContainer" containerID="564f6b823a6f1aa09a39d5db433824f256cbc10b60795637c9c287ec0ebbc3a2" Jan 21 11:33:53 crc kubenswrapper[4881]: E0121 11:33:53.568718 4881 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"564f6b823a6f1aa09a39d5db433824f256cbc10b60795637c9c287ec0ebbc3a2\": container with ID starting with 564f6b823a6f1aa09a39d5db433824f256cbc10b60795637c9c287ec0ebbc3a2 not found: ID does not exist" containerID="564f6b823a6f1aa09a39d5db433824f256cbc10b60795637c9c287ec0ebbc3a2" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.568773 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"564f6b823a6f1aa09a39d5db433824f256cbc10b60795637c9c287ec0ebbc3a2"} err="failed to get container status \"564f6b823a6f1aa09a39d5db433824f256cbc10b60795637c9c287ec0ebbc3a2\": rpc error: code = NotFound desc = could not find container \"564f6b823a6f1aa09a39d5db433824f256cbc10b60795637c9c287ec0ebbc3a2\": container with ID starting with 564f6b823a6f1aa09a39d5db433824f256cbc10b60795637c9c287ec0ebbc3a2 not found: ID does not exist" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.568828 4881 scope.go:117] "RemoveContainer" containerID="c10512d9c18e4cb3f71ce9e97dc85557eb1d6bd93eecea4367efb88fd50b12d7" Jan 21 11:33:53 crc kubenswrapper[4881]: E0121 11:33:53.569142 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c10512d9c18e4cb3f71ce9e97dc85557eb1d6bd93eecea4367efb88fd50b12d7\": container with ID starting with c10512d9c18e4cb3f71ce9e97dc85557eb1d6bd93eecea4367efb88fd50b12d7 not found: ID does not exist" containerID="c10512d9c18e4cb3f71ce9e97dc85557eb1d6bd93eecea4367efb88fd50b12d7" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.569190 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c10512d9c18e4cb3f71ce9e97dc85557eb1d6bd93eecea4367efb88fd50b12d7"} err="failed to get container status \"c10512d9c18e4cb3f71ce9e97dc85557eb1d6bd93eecea4367efb88fd50b12d7\": rpc error: code = NotFound desc = could not find container \"c10512d9c18e4cb3f71ce9e97dc85557eb1d6bd93eecea4367efb88fd50b12d7\": container with ID starting with c10512d9c18e4cb3f71ce9e97dc85557eb1d6bd93eecea4367efb88fd50b12d7 not found: ID does not exist" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.569211 4881 scope.go:117] "RemoveContainer" containerID="da1884db75984a22d15c0d5244bbfd183ce4833da864081225239071f7cec101" Jan 21 11:33:53 crc kubenswrapper[4881]: E0121 11:33:53.569450 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da1884db75984a22d15c0d5244bbfd183ce4833da864081225239071f7cec101\": container with ID starting with da1884db75984a22d15c0d5244bbfd183ce4833da864081225239071f7cec101 not found: ID does not exist" containerID="da1884db75984a22d15c0d5244bbfd183ce4833da864081225239071f7cec101" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.569487 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da1884db75984a22d15c0d5244bbfd183ce4833da864081225239071f7cec101"} err="failed to get container status \"da1884db75984a22d15c0d5244bbfd183ce4833da864081225239071f7cec101\": rpc error: code = NotFound desc = could not find container \"da1884db75984a22d15c0d5244bbfd183ce4833da864081225239071f7cec101\": container with ID starting with da1884db75984a22d15c0d5244bbfd183ce4833da864081225239071f7cec101 not found: ID does not exist" Jan 21 11:33:55 crc kubenswrapper[4881]: I0121 11:33:55.325535 4881 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="067c1d92-f45d-4b2d-978c-7db14c5db12c" path="/var/lib/kubelet/pods/067c1d92-f45d-4b2d-978c-7db14c5db12c/volumes" Jan 21 11:33:58 crc kubenswrapper[4881]: I0121 11:33:58.040844 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-bdc49"] Jan 21 11:33:58 crc kubenswrapper[4881]: I0121 11:33:58.056654 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-bdc49"] Jan 21 11:33:59 crc kubenswrapper[4881]: I0121 11:33:59.522314 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d8ffc48-6b0f-48d1-b13d-8a766f5b604a" path="/var/lib/kubelet/pods/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a/volumes" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.306619 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4vlh9"] Jan 21 11:34:09 crc kubenswrapper[4881]: E0121 11:34:09.307532 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="067c1d92-f45d-4b2d-978c-7db14c5db12c" containerName="extract-content" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.307544 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="067c1d92-f45d-4b2d-978c-7db14c5db12c" containerName="extract-content" Jan 21 11:34:09 crc kubenswrapper[4881]: E0121 11:34:09.307557 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="067c1d92-f45d-4b2d-978c-7db14c5db12c" containerName="extract-utilities" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.307566 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="067c1d92-f45d-4b2d-978c-7db14c5db12c" containerName="extract-utilities" Jan 21 11:34:09 crc kubenswrapper[4881]: E0121 11:34:09.307598 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="067c1d92-f45d-4b2d-978c-7db14c5db12c" containerName="registry-server" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.307605 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="067c1d92-f45d-4b2d-978c-7db14c5db12c" containerName="registry-server" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.307810 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="067c1d92-f45d-4b2d-978c-7db14c5db12c" containerName="registry-server" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.309241 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.341704 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4vlh9"] Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.371077 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz9xk\" (UniqueName: \"kubernetes.io/projected/4b51ea6d-7925-4ba0-af48-901f9ef8f774-kube-api-access-zz9xk\") pod \"certified-operators-4vlh9\" (UID: \"4b51ea6d-7925-4ba0-af48-901f9ef8f774\") " pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.371249 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b51ea6d-7925-4ba0-af48-901f9ef8f774-utilities\") pod \"certified-operators-4vlh9\" (UID: \"4b51ea6d-7925-4ba0-af48-901f9ef8f774\") " pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.371286 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b51ea6d-7925-4ba0-af48-901f9ef8f774-catalog-content\") pod \"certified-operators-4vlh9\" (UID: \"4b51ea6d-7925-4ba0-af48-901f9ef8f774\") " pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.507817 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b51ea6d-7925-4ba0-af48-901f9ef8f774-utilities\") pod \"certified-operators-4vlh9\" (UID: \"4b51ea6d-7925-4ba0-af48-901f9ef8f774\") " pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.507879 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b51ea6d-7925-4ba0-af48-901f9ef8f774-catalog-content\") pod \"certified-operators-4vlh9\" (UID: \"4b51ea6d-7925-4ba0-af48-901f9ef8f774\") " pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.508038 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz9xk\" (UniqueName: \"kubernetes.io/projected/4b51ea6d-7925-4ba0-af48-901f9ef8f774-kube-api-access-zz9xk\") pod \"certified-operators-4vlh9\" (UID: \"4b51ea6d-7925-4ba0-af48-901f9ef8f774\") " pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.508513 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b51ea6d-7925-4ba0-af48-901f9ef8f774-utilities\") pod \"certified-operators-4vlh9\" (UID: \"4b51ea6d-7925-4ba0-af48-901f9ef8f774\") " pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.508850 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b51ea6d-7925-4ba0-af48-901f9ef8f774-catalog-content\") pod \"certified-operators-4vlh9\" (UID: \"4b51ea6d-7925-4ba0-af48-901f9ef8f774\") " pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.541108 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zz9xk\" (UniqueName: \"kubernetes.io/projected/4b51ea6d-7925-4ba0-af48-901f9ef8f774-kube-api-access-zz9xk\") pod \"certified-operators-4vlh9\" (UID: \"4b51ea6d-7925-4ba0-af48-901f9ef8f774\") " pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.650478 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:10 crc kubenswrapper[4881]: I0121 11:34:10.199355 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4vlh9"] Jan 21 11:34:11 crc kubenswrapper[4881]: I0121 11:34:11.045005 4881 generic.go:334] "Generic (PLEG): container finished" podID="4b51ea6d-7925-4ba0-af48-901f9ef8f774" containerID="d67f30b1065ba9c2e5e661b4d33f75f8b5adbff3b28180745f9c5f99280ec4d4" exitCode=0 Jan 21 11:34:11 crc kubenswrapper[4881]: I0121 11:34:11.045075 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vlh9" event={"ID":"4b51ea6d-7925-4ba0-af48-901f9ef8f774","Type":"ContainerDied","Data":"d67f30b1065ba9c2e5e661b4d33f75f8b5adbff3b28180745f9c5f99280ec4d4"} Jan 21 11:34:11 crc kubenswrapper[4881]: I0121 11:34:11.045359 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vlh9" event={"ID":"4b51ea6d-7925-4ba0-af48-901f9ef8f774","Type":"ContainerStarted","Data":"db8e6c89e98fa09300373151d8b1fe224bb54f6db3db3ee5e913299b110c67d8"} Jan 21 11:34:11 crc kubenswrapper[4881]: I0121 11:34:11.047718 4881 generic.go:334] "Generic (PLEG): container finished" podID="24a093f9-cd67-48f9-a18b-48d1a79a8aa0" containerID="ed91e50a3880cb037a332efeeea663c905f6d34b8520e7608505f8f61898c93d" exitCode=0 Jan 21 11:34:11 crc kubenswrapper[4881]: I0121 11:34:11.047767 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" event={"ID":"24a093f9-cd67-48f9-a18b-48d1a79a8aa0","Type":"ContainerDied","Data":"ed91e50a3880cb037a332efeeea663c905f6d34b8520e7608505f8f61898c93d"} Jan 21 11:34:12 crc kubenswrapper[4881]: I0121 11:34:12.056971 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vlh9" event={"ID":"4b51ea6d-7925-4ba0-af48-901f9ef8f774","Type":"ContainerStarted","Data":"69ff14d981728245b89b2edfeb15ee103c7fc0d9ef94fb16eaa34e81e72f1f8d"} Jan 21 11:34:12 crc kubenswrapper[4881]: I0121 11:34:12.530872 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" Jan 21 11:34:12 crc kubenswrapper[4881]: I0121 11:34:12.602616 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwwm9\" (UniqueName: \"kubernetes.io/projected/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-kube-api-access-wwwm9\") pod \"24a093f9-cd67-48f9-a18b-48d1a79a8aa0\" (UID: \"24a093f9-cd67-48f9-a18b-48d1a79a8aa0\") " Jan 21 11:34:12 crc kubenswrapper[4881]: I0121 11:34:12.602732 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-inventory\") pod \"24a093f9-cd67-48f9-a18b-48d1a79a8aa0\" (UID: \"24a093f9-cd67-48f9-a18b-48d1a79a8aa0\") " Jan 21 11:34:12 crc kubenswrapper[4881]: I0121 11:34:12.602823 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-ssh-key-openstack-edpm-ipam\") pod \"24a093f9-cd67-48f9-a18b-48d1a79a8aa0\" (UID: \"24a093f9-cd67-48f9-a18b-48d1a79a8aa0\") " Jan 21 11:34:12 crc kubenswrapper[4881]: I0121 11:34:12.612183 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-kube-api-access-wwwm9" (OuterVolumeSpecName: "kube-api-access-wwwm9") pod "24a093f9-cd67-48f9-a18b-48d1a79a8aa0" (UID: "24a093f9-cd67-48f9-a18b-48d1a79a8aa0"). InnerVolumeSpecName "kube-api-access-wwwm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:34:12 crc kubenswrapper[4881]: I0121 11:34:12.632941 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-inventory" (OuterVolumeSpecName: "inventory") pod "24a093f9-cd67-48f9-a18b-48d1a79a8aa0" (UID: "24a093f9-cd67-48f9-a18b-48d1a79a8aa0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:34:12 crc kubenswrapper[4881]: I0121 11:34:12.637888 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "24a093f9-cd67-48f9-a18b-48d1a79a8aa0" (UID: "24a093f9-cd67-48f9-a18b-48d1a79a8aa0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:34:12 crc kubenswrapper[4881]: I0121 11:34:12.711570 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:34:12 crc kubenswrapper[4881]: I0121 11:34:12.711604 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:34:12 crc kubenswrapper[4881]: I0121 11:34:12.711615 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwwm9\" (UniqueName: \"kubernetes.io/projected/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-kube-api-access-wwwm9\") on node \"crc\" DevicePath \"\"" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.195245 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" event={"ID":"24a093f9-cd67-48f9-a18b-48d1a79a8aa0","Type":"ContainerDied","Data":"7a8fa3b39b588ac3bed4bee992d7ff3c312e5258aac1318986c1e1881a279a1c"} Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.195688 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a8fa3b39b588ac3bed4bee992d7ff3c312e5258aac1318986c1e1881a279a1c" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.195801 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.204271 4881 generic.go:334] "Generic (PLEG): container finished" podID="4b51ea6d-7925-4ba0-af48-901f9ef8f774" containerID="69ff14d981728245b89b2edfeb15ee103c7fc0d9ef94fb16eaa34e81e72f1f8d" exitCode=0 Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.204328 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vlh9" event={"ID":"4b51ea6d-7925-4ba0-af48-901f9ef8f774","Type":"ContainerDied","Data":"69ff14d981728245b89b2edfeb15ee103c7fc0d9ef94fb16eaa34e81e72f1f8d"} Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.272929 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp"] Jan 21 11:34:13 crc kubenswrapper[4881]: E0121 11:34:13.273744 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a093f9-cd67-48f9-a18b-48d1a79a8aa0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.273796 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a093f9-cd67-48f9-a18b-48d1a79a8aa0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.274114 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="24a093f9-cd67-48f9-a18b-48d1a79a8aa0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.279427 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.289077 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.289316 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.289512 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.289675 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.331228 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp"] Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.363118 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lm6m\" (UniqueName: \"kubernetes.io/projected/ec204ea7-b207-409b-8fa0-ff2847f7400a-kube-api-access-7lm6m\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp\" (UID: \"ec204ea7-b207-409b-8fa0-ff2847f7400a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.363175 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec204ea7-b207-409b-8fa0-ff2847f7400a-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp\" (UID: \"ec204ea7-b207-409b-8fa0-ff2847f7400a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.363387 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec204ea7-b207-409b-8fa0-ff2847f7400a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp\" (UID: \"ec204ea7-b207-409b-8fa0-ff2847f7400a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.467042 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lm6m\" (UniqueName: \"kubernetes.io/projected/ec204ea7-b207-409b-8fa0-ff2847f7400a-kube-api-access-7lm6m\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp\" (UID: \"ec204ea7-b207-409b-8fa0-ff2847f7400a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.467131 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec204ea7-b207-409b-8fa0-ff2847f7400a-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp\" (UID: \"ec204ea7-b207-409b-8fa0-ff2847f7400a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.467227 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/ec204ea7-b207-409b-8fa0-ff2847f7400a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp\" (UID: \"ec204ea7-b207-409b-8fa0-ff2847f7400a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.472526 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec204ea7-b207-409b-8fa0-ff2847f7400a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp\" (UID: \"ec204ea7-b207-409b-8fa0-ff2847f7400a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.473288 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec204ea7-b207-409b-8fa0-ff2847f7400a-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp\" (UID: \"ec204ea7-b207-409b-8fa0-ff2847f7400a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.491600 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lm6m\" (UniqueName: \"kubernetes.io/projected/ec204ea7-b207-409b-8fa0-ff2847f7400a-kube-api-access-7lm6m\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp\" (UID: \"ec204ea7-b207-409b-8fa0-ff2847f7400a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.617541 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" Jan 21 11:34:13 crc kubenswrapper[4881]: W0121 11:34:13.991399 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec204ea7_b207_409b_8fa0_ff2847f7400a.slice/crio-2c32e7c92bc4ff8bd6fe6be45aae1bb184709bf6bda7cb3b5e2d0d4f1c3e94ad WatchSource:0}: Error finding container 2c32e7c92bc4ff8bd6fe6be45aae1bb184709bf6bda7cb3b5e2d0d4f1c3e94ad: Status 404 returned error can't find the container with id 2c32e7c92bc4ff8bd6fe6be45aae1bb184709bf6bda7cb3b5e2d0d4f1c3e94ad Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.992890 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp"] Jan 21 11:34:14 crc kubenswrapper[4881]: I0121 11:34:14.217911 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vlh9" event={"ID":"4b51ea6d-7925-4ba0-af48-901f9ef8f774","Type":"ContainerStarted","Data":"9a204a4071f03355ba563e850fb77851f121d2bd1cc36b8cb17910eb192265d6"} Jan 21 11:34:14 crc kubenswrapper[4881]: I0121 11:34:14.219159 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" event={"ID":"ec204ea7-b207-409b-8fa0-ff2847f7400a","Type":"ContainerStarted","Data":"2c32e7c92bc4ff8bd6fe6be45aae1bb184709bf6bda7cb3b5e2d0d4f1c3e94ad"} Jan 21 11:34:14 crc kubenswrapper[4881]: I0121 11:34:14.261169 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4vlh9" podStartSLOduration=2.623129703 podStartE2EDuration="5.261150857s" podCreationTimestamp="2026-01-21 11:34:09 +0000 UTC" 
firstStartedPulling="2026-01-21 11:34:11.047660816 +0000 UTC m=+2238.307617295" lastFinishedPulling="2026-01-21 11:34:13.68568198 +0000 UTC m=+2240.945638449" observedRunningTime="2026-01-21 11:34:14.256999475 +0000 UTC m=+2241.516955944" watchObservedRunningTime="2026-01-21 11:34:14.261150857 +0000 UTC m=+2241.521107326" Jan 21 11:34:15 crc kubenswrapper[4881]: I0121 11:34:15.234760 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" event={"ID":"ec204ea7-b207-409b-8fa0-ff2847f7400a","Type":"ContainerStarted","Data":"16130ddaa7d6120624e03973b67c3a94a50f4edd014c457d5948bdfe0654d13c"} Jan 21 11:34:15 crc kubenswrapper[4881]: I0121 11:34:15.264986 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" podStartSLOduration=1.708805807 podStartE2EDuration="2.264956307s" podCreationTimestamp="2026-01-21 11:34:13 +0000 UTC" firstStartedPulling="2026-01-21 11:34:13.994434348 +0000 UTC m=+2241.254390817" lastFinishedPulling="2026-01-21 11:34:14.550584848 +0000 UTC m=+2241.810541317" observedRunningTime="2026-01-21 11:34:15.252416297 +0000 UTC m=+2242.512372766" watchObservedRunningTime="2026-01-21 11:34:15.264956307 +0000 UTC m=+2242.524912776" Jan 21 11:34:19 crc kubenswrapper[4881]: I0121 11:34:19.651335 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:19 crc kubenswrapper[4881]: I0121 11:34:19.653159 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:19 crc kubenswrapper[4881]: I0121 11:34:19.736090 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:20 crc kubenswrapper[4881]: I0121 11:34:20.350071 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:20 crc kubenswrapper[4881]: I0121 11:34:20.414939 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4vlh9"] Jan 21 11:34:21 crc kubenswrapper[4881]: I0121 11:34:21.299103 4881 generic.go:334] "Generic (PLEG): container finished" podID="ec204ea7-b207-409b-8fa0-ff2847f7400a" containerID="16130ddaa7d6120624e03973b67c3a94a50f4edd014c457d5948bdfe0654d13c" exitCode=0 Jan 21 11:34:21 crc kubenswrapper[4881]: I0121 11:34:21.299190 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" event={"ID":"ec204ea7-b207-409b-8fa0-ff2847f7400a","Type":"ContainerDied","Data":"16130ddaa7d6120624e03973b67c3a94a50f4edd014c457d5948bdfe0654d13c"} Jan 21 11:34:22 crc kubenswrapper[4881]: I0121 11:34:22.308398 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4vlh9" podUID="4b51ea6d-7925-4ba0-af48-901f9ef8f774" containerName="registry-server" containerID="cri-o://9a204a4071f03355ba563e850fb77851f121d2bd1cc36b8cb17910eb192265d6" gracePeriod=2 Jan 21 11:34:22 crc kubenswrapper[4881]: I0121 11:34:22.822042 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" Jan 21 11:34:22 crc kubenswrapper[4881]: I0121 11:34:22.994871 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec204ea7-b207-409b-8fa0-ff2847f7400a-inventory\") pod \"ec204ea7-b207-409b-8fa0-ff2847f7400a\" (UID: \"ec204ea7-b207-409b-8fa0-ff2847f7400a\") " Jan 21 11:34:22 crc kubenswrapper[4881]: I0121 11:34:22.995308 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lm6m\" (UniqueName: \"kubernetes.io/projected/ec204ea7-b207-409b-8fa0-ff2847f7400a-kube-api-access-7lm6m\") pod \"ec204ea7-b207-409b-8fa0-ff2847f7400a\" (UID: \"ec204ea7-b207-409b-8fa0-ff2847f7400a\") " Jan 21 11:34:22 crc kubenswrapper[4881]: I0121 11:34:22.995375 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec204ea7-b207-409b-8fa0-ff2847f7400a-ssh-key-openstack-edpm-ipam\") pod \"ec204ea7-b207-409b-8fa0-ff2847f7400a\" (UID: \"ec204ea7-b207-409b-8fa0-ff2847f7400a\") " Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.007899 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec204ea7-b207-409b-8fa0-ff2847f7400a-kube-api-access-7lm6m" (OuterVolumeSpecName: "kube-api-access-7lm6m") pod "ec204ea7-b207-409b-8fa0-ff2847f7400a" (UID: "ec204ea7-b207-409b-8fa0-ff2847f7400a"). InnerVolumeSpecName "kube-api-access-7lm6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.026804 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec204ea7-b207-409b-8fa0-ff2847f7400a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ec204ea7-b207-409b-8fa0-ff2847f7400a" (UID: "ec204ea7-b207-409b-8fa0-ff2847f7400a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.031230 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec204ea7-b207-409b-8fa0-ff2847f7400a-inventory" (OuterVolumeSpecName: "inventory") pod "ec204ea7-b207-409b-8fa0-ff2847f7400a" (UID: "ec204ea7-b207-409b-8fa0-ff2847f7400a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.098163 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lm6m\" (UniqueName: \"kubernetes.io/projected/ec204ea7-b207-409b-8fa0-ff2847f7400a-kube-api-access-7lm6m\") on node \"crc\" DevicePath \"\"" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.098226 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec204ea7-b207-409b-8fa0-ff2847f7400a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.098242 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec204ea7-b207-409b-8fa0-ff2847f7400a-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.591491 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.636323 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" event={"ID":"ec204ea7-b207-409b-8fa0-ff2847f7400a","Type":"ContainerDied","Data":"2c32e7c92bc4ff8bd6fe6be45aae1bb184709bf6bda7cb3b5e2d0d4f1c3e94ad"} Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.636377 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c32e7c92bc4ff8bd6fe6be45aae1bb184709bf6bda7cb3b5e2d0d4f1c3e94ad" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.636606 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.648965 4881 generic.go:334] "Generic (PLEG): container finished" podID="4b51ea6d-7925-4ba0-af48-901f9ef8f774" containerID="9a204a4071f03355ba563e850fb77851f121d2bd1cc36b8cb17910eb192265d6" exitCode=0 Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.649040 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vlh9" event={"ID":"4b51ea6d-7925-4ba0-af48-901f9ef8f774","Type":"ContainerDied","Data":"9a204a4071f03355ba563e850fb77851f121d2bd1cc36b8cb17910eb192265d6"} Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.649083 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vlh9" event={"ID":"4b51ea6d-7925-4ba0-af48-901f9ef8f774","Type":"ContainerDied","Data":"db8e6c89e98fa09300373151d8b1fe224bb54f6db3db3ee5e913299b110c67d8"} Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.649107 4881 scope.go:117] "RemoveContainer" containerID="9a204a4071f03355ba563e850fb77851f121d2bd1cc36b8cb17910eb192265d6" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.649399 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.701110 4881 scope.go:117] "RemoveContainer" containerID="69ff14d981728245b89b2edfeb15ee103c7fc0d9ef94fb16eaa34e81e72f1f8d" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.733226 4881 scope.go:117] "RemoveContainer" containerID="d67f30b1065ba9c2e5e661b4d33f75f8b5adbff3b28180745f9c5f99280ec4d4" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.775372 4881 scope.go:117] "RemoveContainer" containerID="9a204a4071f03355ba563e850fb77851f121d2bd1cc36b8cb17910eb192265d6" Jan 21 11:34:23 crc kubenswrapper[4881]: E0121 11:34:23.777650 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a204a4071f03355ba563e850fb77851f121d2bd1cc36b8cb17910eb192265d6\": container with ID starting with 9a204a4071f03355ba563e850fb77851f121d2bd1cc36b8cb17910eb192265d6 not found: ID does not exist" containerID="9a204a4071f03355ba563e850fb77851f121d2bd1cc36b8cb17910eb192265d6" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.777687 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a204a4071f03355ba563e850fb77851f121d2bd1cc36b8cb17910eb192265d6"} err="failed to get container status \"9a204a4071f03355ba563e850fb77851f121d2bd1cc36b8cb17910eb192265d6\": rpc error: code = NotFound desc = could not find container \"9a204a4071f03355ba563e850fb77851f121d2bd1cc36b8cb17910eb192265d6\": container with ID starting with 9a204a4071f03355ba563e850fb77851f121d2bd1cc36b8cb17910eb192265d6 not found: ID does not exist" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.777712 4881 scope.go:117] "RemoveContainer" containerID="69ff14d981728245b89b2edfeb15ee103c7fc0d9ef94fb16eaa34e81e72f1f8d" Jan 21 11:34:23 crc kubenswrapper[4881]: E0121 11:34:23.778179 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69ff14d981728245b89b2edfeb15ee103c7fc0d9ef94fb16eaa34e81e72f1f8d\": container with ID starting with 69ff14d981728245b89b2edfeb15ee103c7fc0d9ef94fb16eaa34e81e72f1f8d not found: ID does not exist" containerID="69ff14d981728245b89b2edfeb15ee103c7fc0d9ef94fb16eaa34e81e72f1f8d" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.778207 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69ff14d981728245b89b2edfeb15ee103c7fc0d9ef94fb16eaa34e81e72f1f8d"} err="failed to get container status \"69ff14d981728245b89b2edfeb15ee103c7fc0d9ef94fb16eaa34e81e72f1f8d\": rpc error: code = NotFound desc = could not find container \"69ff14d981728245b89b2edfeb15ee103c7fc0d9ef94fb16eaa34e81e72f1f8d\": container with ID starting with 69ff14d981728245b89b2edfeb15ee103c7fc0d9ef94fb16eaa34e81e72f1f8d not found: ID does not exist" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.778224 4881 scope.go:117] "RemoveContainer" containerID="d67f30b1065ba9c2e5e661b4d33f75f8b5adbff3b28180745f9c5f99280ec4d4" Jan 21 11:34:23 crc kubenswrapper[4881]: E0121 11:34:23.778653 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d67f30b1065ba9c2e5e661b4d33f75f8b5adbff3b28180745f9c5f99280ec4d4\": container with ID starting with d67f30b1065ba9c2e5e661b4d33f75f8b5adbff3b28180745f9c5f99280ec4d4 not found: ID does not exist" containerID="d67f30b1065ba9c2e5e661b4d33f75f8b5adbff3b28180745f9c5f99280ec4d4" 
Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.778686 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d67f30b1065ba9c2e5e661b4d33f75f8b5adbff3b28180745f9c5f99280ec4d4"} err="failed to get container status \"d67f30b1065ba9c2e5e661b4d33f75f8b5adbff3b28180745f9c5f99280ec4d4\": rpc error: code = NotFound desc = could not find container \"d67f30b1065ba9c2e5e661b4d33f75f8b5adbff3b28180745f9c5f99280ec4d4\": container with ID starting with d67f30b1065ba9c2e5e661b4d33f75f8b5adbff3b28180745f9c5f99280ec4d4 not found: ID does not exist" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.795310 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b51ea6d-7925-4ba0-af48-901f9ef8f774-catalog-content\") pod \"4b51ea6d-7925-4ba0-af48-901f9ef8f774\" (UID: \"4b51ea6d-7925-4ba0-af48-901f9ef8f774\") " Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.795438 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz9xk\" (UniqueName: \"kubernetes.io/projected/4b51ea6d-7925-4ba0-af48-901f9ef8f774-kube-api-access-zz9xk\") pod \"4b51ea6d-7925-4ba0-af48-901f9ef8f774\" (UID: \"4b51ea6d-7925-4ba0-af48-901f9ef8f774\") " Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.795596 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b51ea6d-7925-4ba0-af48-901f9ef8f774-utilities\") pod \"4b51ea6d-7925-4ba0-af48-901f9ef8f774\" (UID: \"4b51ea6d-7925-4ba0-af48-901f9ef8f774\") " Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.797837 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b51ea6d-7925-4ba0-af48-901f9ef8f774-utilities" (OuterVolumeSpecName: "utilities") pod "4b51ea6d-7925-4ba0-af48-901f9ef8f774" (UID: "4b51ea6d-7925-4ba0-af48-901f9ef8f774"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.803174 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b51ea6d-7925-4ba0-af48-901f9ef8f774-kube-api-access-zz9xk" (OuterVolumeSpecName: "kube-api-access-zz9xk") pod "4b51ea6d-7925-4ba0-af48-901f9ef8f774" (UID: "4b51ea6d-7925-4ba0-af48-901f9ef8f774"). InnerVolumeSpecName "kube-api-access-zz9xk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.840611 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl"] Jan 21 11:34:23 crc kubenswrapper[4881]: E0121 11:34:23.841163 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b51ea6d-7925-4ba0-af48-901f9ef8f774" containerName="extract-utilities" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.841182 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b51ea6d-7925-4ba0-af48-901f9ef8f774" containerName="extract-utilities" Jan 21 11:34:23 crc kubenswrapper[4881]: E0121 11:34:23.841195 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec204ea7-b207-409b-8fa0-ff2847f7400a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.841203 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec204ea7-b207-409b-8fa0-ff2847f7400a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 11:34:23 crc kubenswrapper[4881]: E0121 11:34:23.841221 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b51ea6d-7925-4ba0-af48-901f9ef8f774" containerName="extract-content" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.841227 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b51ea6d-7925-4ba0-af48-901f9ef8f774" containerName="extract-content" Jan 21 11:34:23 crc kubenswrapper[4881]: E0121 11:34:23.841249 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b51ea6d-7925-4ba0-af48-901f9ef8f774" containerName="registry-server" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.841255 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b51ea6d-7925-4ba0-af48-901f9ef8f774" containerName="registry-server" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.841441 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec204ea7-b207-409b-8fa0-ff2847f7400a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.841457 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b51ea6d-7925-4ba0-af48-901f9ef8f774" containerName="registry-server" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.842194 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.845237 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.845369 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.845516 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.846297 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.852191 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl"] Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.867479 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b51ea6d-7925-4ba0-af48-901f9ef8f774-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b51ea6d-7925-4ba0-af48-901f9ef8f774" (UID: "4b51ea6d-7925-4ba0-af48-901f9ef8f774"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.899440 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b51ea6d-7925-4ba0-af48-901f9ef8f774-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.899676 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz9xk\" (UniqueName: \"kubernetes.io/projected/4b51ea6d-7925-4ba0-af48-901f9ef8f774-kube-api-access-zz9xk\") on node \"crc\" DevicePath \"\"" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.899743 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b51ea6d-7925-4ba0-af48-901f9ef8f774-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.987702 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4vlh9"] Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.996677 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4vlh9"] Jan 21 11:34:24 crc kubenswrapper[4881]: I0121 11:34:24.008637 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg4tq\" (UniqueName: \"kubernetes.io/projected/3880ebda-d882-4e35-89e7-ef739a423a7d-kube-api-access-mg4tq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6khfl\" (UID: \"3880ebda-d882-4e35-89e7-ef739a423a7d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" Jan 21 11:34:24 crc kubenswrapper[4881]: I0121 11:34:24.008760 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3880ebda-d882-4e35-89e7-ef739a423a7d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6khfl\" (UID: \"3880ebda-d882-4e35-89e7-ef739a423a7d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" Jan 21 11:34:24 crc 
kubenswrapper[4881]: I0121 11:34:24.008857 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3880ebda-d882-4e35-89e7-ef739a423a7d-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6khfl\" (UID: \"3880ebda-d882-4e35-89e7-ef739a423a7d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" Jan 21 11:34:24 crc kubenswrapper[4881]: I0121 11:34:24.113011 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3880ebda-d882-4e35-89e7-ef739a423a7d-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6khfl\" (UID: \"3880ebda-d882-4e35-89e7-ef739a423a7d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" Jan 21 11:34:24 crc kubenswrapper[4881]: I0121 11:34:24.113162 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg4tq\" (UniqueName: \"kubernetes.io/projected/3880ebda-d882-4e35-89e7-ef739a423a7d-kube-api-access-mg4tq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6khfl\" (UID: \"3880ebda-d882-4e35-89e7-ef739a423a7d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" Jan 21 11:34:24 crc kubenswrapper[4881]: I0121 11:34:24.113412 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3880ebda-d882-4e35-89e7-ef739a423a7d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6khfl\" (UID: \"3880ebda-d882-4e35-89e7-ef739a423a7d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" Jan 21 11:34:24 crc kubenswrapper[4881]: I0121 11:34:24.116888 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3880ebda-d882-4e35-89e7-ef739a423a7d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6khfl\" (UID: \"3880ebda-d882-4e35-89e7-ef739a423a7d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" Jan 21 11:34:24 crc kubenswrapper[4881]: I0121 11:34:24.117082 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3880ebda-d882-4e35-89e7-ef739a423a7d-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6khfl\" (UID: \"3880ebda-d882-4e35-89e7-ef739a423a7d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" Jan 21 11:34:24 crc kubenswrapper[4881]: I0121 11:34:24.131320 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg4tq\" (UniqueName: \"kubernetes.io/projected/3880ebda-d882-4e35-89e7-ef739a423a7d-kube-api-access-mg4tq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6khfl\" (UID: \"3880ebda-d882-4e35-89e7-ef739a423a7d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" Jan 21 11:34:24 crc kubenswrapper[4881]: I0121 11:34:24.222862 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" Jan 21 11:34:24 crc kubenswrapper[4881]: I0121 11:34:24.965685 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl"] Jan 21 11:34:25 crc kubenswrapper[4881]: I0121 11:34:25.325721 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b51ea6d-7925-4ba0-af48-901f9ef8f774" path="/var/lib/kubelet/pods/4b51ea6d-7925-4ba0-af48-901f9ef8f774/volumes" Jan 21 11:34:25 crc kubenswrapper[4881]: I0121 11:34:25.684865 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" event={"ID":"3880ebda-d882-4e35-89e7-ef739a423a7d","Type":"ContainerStarted","Data":"7714267ec3dc1640c123557117fbc7bea0a5f6ebfaf06413867f22000ae2f1bc"} Jan 21 11:34:25 crc kubenswrapper[4881]: I0121 11:34:25.684924 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" event={"ID":"3880ebda-d882-4e35-89e7-ef739a423a7d","Type":"ContainerStarted","Data":"8d72a218bfa949867c619b3098aa191e472babf9948808437235ab0bbda32186"} Jan 21 11:34:25 crc kubenswrapper[4881]: I0121 11:34:25.715922 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" podStartSLOduration=2.284404114 podStartE2EDuration="2.715899224s" podCreationTimestamp="2026-01-21 11:34:23 +0000 UTC" firstStartedPulling="2026-01-21 11:34:24.972086318 +0000 UTC m=+2252.232042787" lastFinishedPulling="2026-01-21 11:34:25.403581428 +0000 UTC m=+2252.663537897" observedRunningTime="2026-01-21 11:34:25.699704935 +0000 UTC m=+2252.959661424" watchObservedRunningTime="2026-01-21 11:34:25.715899224 +0000 UTC m=+2252.975855703" Jan 21 11:34:31 crc kubenswrapper[4881]: I0121 11:34:31.053928 4881 scope.go:117] "RemoveContainer" containerID="62b5fd9972946ab2305558cba9c0d54f5b29b725654cb25337e61434a431d9ea" Jan 21 11:35:15 crc kubenswrapper[4881]: I0121 11:35:15.356273 4881 generic.go:334] "Generic (PLEG): container finished" podID="3880ebda-d882-4e35-89e7-ef739a423a7d" containerID="7714267ec3dc1640c123557117fbc7bea0a5f6ebfaf06413867f22000ae2f1bc" exitCode=0 Jan 21 11:35:15 crc kubenswrapper[4881]: I0121 11:35:15.356403 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" event={"ID":"3880ebda-d882-4e35-89e7-ef739a423a7d","Type":"ContainerDied","Data":"7714267ec3dc1640c123557117fbc7bea0a5f6ebfaf06413867f22000ae2f1bc"} Jan 21 11:35:16 crc kubenswrapper[4881]: I0121 11:35:16.812368 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" Jan 21 11:35:16 crc kubenswrapper[4881]: I0121 11:35:16.945900 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3880ebda-d882-4e35-89e7-ef739a423a7d-inventory\") pod \"3880ebda-d882-4e35-89e7-ef739a423a7d\" (UID: \"3880ebda-d882-4e35-89e7-ef739a423a7d\") " Jan 21 11:35:16 crc kubenswrapper[4881]: I0121 11:35:16.946172 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg4tq\" (UniqueName: \"kubernetes.io/projected/3880ebda-d882-4e35-89e7-ef739a423a7d-kube-api-access-mg4tq\") pod \"3880ebda-d882-4e35-89e7-ef739a423a7d\" (UID: \"3880ebda-d882-4e35-89e7-ef739a423a7d\") " Jan 21 11:35:16 crc kubenswrapper[4881]: I0121 11:35:16.946260 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3880ebda-d882-4e35-89e7-ef739a423a7d-ssh-key-openstack-edpm-ipam\") pod \"3880ebda-d882-4e35-89e7-ef739a423a7d\" (UID: \"3880ebda-d882-4e35-89e7-ef739a423a7d\") " Jan 21 11:35:16 crc kubenswrapper[4881]: I0121 11:35:16.951586 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3880ebda-d882-4e35-89e7-ef739a423a7d-kube-api-access-mg4tq" (OuterVolumeSpecName: "kube-api-access-mg4tq") pod "3880ebda-d882-4e35-89e7-ef739a423a7d" (UID: "3880ebda-d882-4e35-89e7-ef739a423a7d"). InnerVolumeSpecName "kube-api-access-mg4tq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:35:16 crc kubenswrapper[4881]: I0121 11:35:16.975820 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3880ebda-d882-4e35-89e7-ef739a423a7d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3880ebda-d882-4e35-89e7-ef739a423a7d" (UID: "3880ebda-d882-4e35-89e7-ef739a423a7d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:35:16 crc kubenswrapper[4881]: I0121 11:35:16.976421 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3880ebda-d882-4e35-89e7-ef739a423a7d-inventory" (OuterVolumeSpecName: "inventory") pod "3880ebda-d882-4e35-89e7-ef739a423a7d" (UID: "3880ebda-d882-4e35-89e7-ef739a423a7d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.049126 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3880ebda-d882-4e35-89e7-ef739a423a7d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.049165 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3880ebda-d882-4e35-89e7-ef739a423a7d-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.049178 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg4tq\" (UniqueName: \"kubernetes.io/projected/3880ebda-d882-4e35-89e7-ef739a423a7d-kube-api-access-mg4tq\") on node \"crc\" DevicePath \"\"" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.378271 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" event={"ID":"3880ebda-d882-4e35-89e7-ef739a423a7d","Type":"ContainerDied","Data":"8d72a218bfa949867c619b3098aa191e472babf9948808437235ab0bbda32186"} Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.378318 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.378327 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d72a218bfa949867c619b3098aa191e472babf9948808437235ab0bbda32186" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.484940 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r"] Jan 21 11:35:17 crc kubenswrapper[4881]: E0121 11:35:17.486247 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3880ebda-d882-4e35-89e7-ef739a423a7d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.486331 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="3880ebda-d882-4e35-89e7-ef739a423a7d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.486613 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="3880ebda-d882-4e35-89e7-ef739a423a7d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.487440 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.491428 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.491733 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.499307 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r"] Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.533291 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.533631 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.663014 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f96dcee4-7734-4166-9a01-443c6ee66f86-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c995r\" (UID: \"f96dcee4-7734-4166-9a01-443c6ee66f86\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.663150 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rccwx\" (UniqueName: \"kubernetes.io/projected/f96dcee4-7734-4166-9a01-443c6ee66f86-kube-api-access-rccwx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c995r\" (UID: \"f96dcee4-7734-4166-9a01-443c6ee66f86\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.663535 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f96dcee4-7734-4166-9a01-443c6ee66f86-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c995r\" (UID: \"f96dcee4-7734-4166-9a01-443c6ee66f86\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.765500 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f96dcee4-7734-4166-9a01-443c6ee66f86-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c995r\" (UID: \"f96dcee4-7734-4166-9a01-443c6ee66f86\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.765558 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f96dcee4-7734-4166-9a01-443c6ee66f86-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c995r\" (UID: \"f96dcee4-7734-4166-9a01-443c6ee66f86\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.765633 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rccwx\" (UniqueName: 
\"kubernetes.io/projected/f96dcee4-7734-4166-9a01-443c6ee66f86-kube-api-access-rccwx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c995r\" (UID: \"f96dcee4-7734-4166-9a01-443c6ee66f86\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.772832 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f96dcee4-7734-4166-9a01-443c6ee66f86-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c995r\" (UID: \"f96dcee4-7734-4166-9a01-443c6ee66f86\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.773040 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f96dcee4-7734-4166-9a01-443c6ee66f86-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c995r\" (UID: \"f96dcee4-7734-4166-9a01-443c6ee66f86\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.785472 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rccwx\" (UniqueName: \"kubernetes.io/projected/f96dcee4-7734-4166-9a01-443c6ee66f86-kube-api-access-rccwx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c995r\" (UID: \"f96dcee4-7734-4166-9a01-443c6ee66f86\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.847040 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" Jan 21 11:35:18 crc kubenswrapper[4881]: I0121 11:35:18.409142 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r"] Jan 21 11:35:19 crc kubenswrapper[4881]: I0121 11:35:19.405920 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" event={"ID":"f96dcee4-7734-4166-9a01-443c6ee66f86","Type":"ContainerStarted","Data":"0e14bee8e522916ea5670966d0aff696ff982885f93ca6e554dfbd5aec6d5c80"} Jan 21 11:35:19 crc kubenswrapper[4881]: I0121 11:35:19.406468 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" event={"ID":"f96dcee4-7734-4166-9a01-443c6ee66f86","Type":"ContainerStarted","Data":"c2b4105bf60b3cd2cbbdb22ac6c4b2b563ce2ee61089eacc780ff88d5f4eeae1"} Jan 21 11:35:19 crc kubenswrapper[4881]: I0121 11:35:19.434455 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" podStartSLOduration=2.009433461 podStartE2EDuration="2.43442422s" podCreationTimestamp="2026-01-21 11:35:17 +0000 UTC" firstStartedPulling="2026-01-21 11:35:18.410874362 +0000 UTC m=+2305.670830831" lastFinishedPulling="2026-01-21 11:35:18.835865121 +0000 UTC m=+2306.095821590" observedRunningTime="2026-01-21 11:35:19.424679139 +0000 UTC m=+2306.684635608" watchObservedRunningTime="2026-01-21 11:35:19.43442422 +0000 UTC m=+2306.694380689" Jan 21 11:35:29 crc kubenswrapper[4881]: I0121 11:35:29.850724 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:35:29 crc kubenswrapper[4881]: I0121 11:35:29.851273 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:35:59 crc kubenswrapper[4881]: I0121 11:35:59.851658 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:35:59 crc kubenswrapper[4881]: I0121 11:35:59.852253 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:36:19 crc kubenswrapper[4881]: I0121 11:36:19.101368 4881 generic.go:334] "Generic (PLEG): container finished" podID="f96dcee4-7734-4166-9a01-443c6ee66f86" containerID="0e14bee8e522916ea5670966d0aff696ff982885f93ca6e554dfbd5aec6d5c80" exitCode=0 Jan 21 11:36:19 crc kubenswrapper[4881]: I0121 11:36:19.101459 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" event={"ID":"f96dcee4-7734-4166-9a01-443c6ee66f86","Type":"ContainerDied","Data":"0e14bee8e522916ea5670966d0aff696ff982885f93ca6e554dfbd5aec6d5c80"} Jan 21 11:36:20 crc kubenswrapper[4881]: I0121 11:36:20.567639 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" Jan 21 11:36:20 crc kubenswrapper[4881]: I0121 11:36:20.698672 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f96dcee4-7734-4166-9a01-443c6ee66f86-ssh-key-openstack-edpm-ipam\") pod \"f96dcee4-7734-4166-9a01-443c6ee66f86\" (UID: \"f96dcee4-7734-4166-9a01-443c6ee66f86\") " Jan 21 11:36:20 crc kubenswrapper[4881]: I0121 11:36:20.698933 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f96dcee4-7734-4166-9a01-443c6ee66f86-inventory\") pod \"f96dcee4-7734-4166-9a01-443c6ee66f86\" (UID: \"f96dcee4-7734-4166-9a01-443c6ee66f86\") " Jan 21 11:36:20 crc kubenswrapper[4881]: I0121 11:36:20.698988 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rccwx\" (UniqueName: \"kubernetes.io/projected/f96dcee4-7734-4166-9a01-443c6ee66f86-kube-api-access-rccwx\") pod \"f96dcee4-7734-4166-9a01-443c6ee66f86\" (UID: \"f96dcee4-7734-4166-9a01-443c6ee66f86\") " Jan 21 11:36:20 crc kubenswrapper[4881]: I0121 11:36:20.705609 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f96dcee4-7734-4166-9a01-443c6ee66f86-kube-api-access-rccwx" (OuterVolumeSpecName: "kube-api-access-rccwx") pod "f96dcee4-7734-4166-9a01-443c6ee66f86" (UID: "f96dcee4-7734-4166-9a01-443c6ee66f86"). 
InnerVolumeSpecName "kube-api-access-rccwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:36:20 crc kubenswrapper[4881]: I0121 11:36:20.730130 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f96dcee4-7734-4166-9a01-443c6ee66f86-inventory" (OuterVolumeSpecName: "inventory") pod "f96dcee4-7734-4166-9a01-443c6ee66f86" (UID: "f96dcee4-7734-4166-9a01-443c6ee66f86"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:36:20 crc kubenswrapper[4881]: I0121 11:36:20.739601 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f96dcee4-7734-4166-9a01-443c6ee66f86-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f96dcee4-7734-4166-9a01-443c6ee66f86" (UID: "f96dcee4-7734-4166-9a01-443c6ee66f86"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:36:20 crc kubenswrapper[4881]: I0121 11:36:20.801275 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f96dcee4-7734-4166-9a01-443c6ee66f86-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:36:20 crc kubenswrapper[4881]: I0121 11:36:20.801325 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rccwx\" (UniqueName: \"kubernetes.io/projected/f96dcee4-7734-4166-9a01-443c6ee66f86-kube-api-access-rccwx\") on node \"crc\" DevicePath \"\"" Jan 21 11:36:20 crc kubenswrapper[4881]: I0121 11:36:20.801340 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f96dcee4-7734-4166-9a01-443c6ee66f86-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.127584 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" event={"ID":"f96dcee4-7734-4166-9a01-443c6ee66f86","Type":"ContainerDied","Data":"c2b4105bf60b3cd2cbbdb22ac6c4b2b563ce2ee61089eacc780ff88d5f4eeae1"} Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.127649 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2b4105bf60b3cd2cbbdb22ac6c4b2b563ce2ee61089eacc780ff88d5f4eeae1" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.127676 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.230095 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-dd2hk"] Jan 21 11:36:21 crc kubenswrapper[4881]: E0121 11:36:21.230809 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f96dcee4-7734-4166-9a01-443c6ee66f86" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.230832 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f96dcee4-7734-4166-9a01-443c6ee66f86" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.231122 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f96dcee4-7734-4166-9a01-443c6ee66f86" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.232165 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.236096 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.236255 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.236282 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.236946 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.248702 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-dd2hk"] Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.312665 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/157a809f-f6fa-43dc-b73d-380976da1312-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-dd2hk\" (UID: \"157a809f-f6fa-43dc-b73d-380976da1312\") " pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.312765 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/157a809f-f6fa-43dc-b73d-380976da1312-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-dd2hk\" (UID: \"157a809f-f6fa-43dc-b73d-380976da1312\") " pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.312935 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcxvz\" (UniqueName: \"kubernetes.io/projected/157a809f-f6fa-43dc-b73d-380976da1312-kube-api-access-hcxvz\") pod \"ssh-known-hosts-edpm-deployment-dd2hk\" (UID: \"157a809f-f6fa-43dc-b73d-380976da1312\") " pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.415587 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/157a809f-f6fa-43dc-b73d-380976da1312-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-dd2hk\" (UID: \"157a809f-f6fa-43dc-b73d-380976da1312\") " pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.415742 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcxvz\" (UniqueName: \"kubernetes.io/projected/157a809f-f6fa-43dc-b73d-380976da1312-kube-api-access-hcxvz\") pod \"ssh-known-hosts-edpm-deployment-dd2hk\" (UID: \"157a809f-f6fa-43dc-b73d-380976da1312\") " pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.417032 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/157a809f-f6fa-43dc-b73d-380976da1312-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-dd2hk\" (UID: \"157a809f-f6fa-43dc-b73d-380976da1312\") " pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.422682 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/157a809f-f6fa-43dc-b73d-380976da1312-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-dd2hk\" (UID: \"157a809f-f6fa-43dc-b73d-380976da1312\") " pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.424109 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/157a809f-f6fa-43dc-b73d-380976da1312-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-dd2hk\" (UID: \"157a809f-f6fa-43dc-b73d-380976da1312\") " pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.433822 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcxvz\" (UniqueName: \"kubernetes.io/projected/157a809f-f6fa-43dc-b73d-380976da1312-kube-api-access-hcxvz\") pod \"ssh-known-hosts-edpm-deployment-dd2hk\" (UID: \"157a809f-f6fa-43dc-b73d-380976da1312\") " pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.587109 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" Jan 21 11:36:22 crc kubenswrapper[4881]: I0121 11:36:22.132482 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-dd2hk"] Jan 21 11:36:23 crc kubenswrapper[4881]: I0121 11:36:23.144953 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" event={"ID":"157a809f-f6fa-43dc-b73d-380976da1312","Type":"ContainerStarted","Data":"f1db5909ded55b74a3536abb2e28180a19052deddcecbb0f0ed78e60d78a0e4f"} Jan 21 11:36:23 crc kubenswrapper[4881]: I0121 11:36:23.145278 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" event={"ID":"157a809f-f6fa-43dc-b73d-380976da1312","Type":"ContainerStarted","Data":"66235c313b4580faaef6c50feeddc7e2004a0ad3aed1911d1a15ba7785f574fc"} Jan 21 11:36:23 crc kubenswrapper[4881]: I0121 11:36:23.167644 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" podStartSLOduration=1.713326981 podStartE2EDuration="2.167620988s" podCreationTimestamp="2026-01-21 11:36:21 +0000 UTC" firstStartedPulling="2026-01-21 11:36:22.151919785 +0000 UTC m=+2369.411876254" lastFinishedPulling="2026-01-21 11:36:22.606213792 +0000 UTC m=+2369.866170261" observedRunningTime="2026-01-21 11:36:23.160053704 +0000 UTC m=+2370.420010173" watchObservedRunningTime="2026-01-21 11:36:23.167620988 +0000 UTC m=+2370.427577457" Jan 21 11:36:29 crc kubenswrapper[4881]: I0121 11:36:29.851401 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:36:29 crc kubenswrapper[4881]: I0121 11:36:29.852458 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:36:29 crc kubenswrapper[4881]: I0121 11:36:29.852546 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 11:36:29 crc kubenswrapper[4881]: I0121 11:36:29.853423 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:36:29 crc kubenswrapper[4881]: I0121 11:36:29.853485 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" gracePeriod=600 Jan 21 11:36:29 crc kubenswrapper[4881]: E0121 11:36:29.978924 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:36:30 crc kubenswrapper[4881]: I0121 11:36:30.227122 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" exitCode=0 Jan 21 11:36:30 crc kubenswrapper[4881]: I0121 11:36:30.227159 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f"} Jan 21 11:36:30 crc kubenswrapper[4881]: I0121 11:36:30.227223 4881 scope.go:117] "RemoveContainer" containerID="ef39ee7cfe761ce9a9728441eb10e70a161b503ea812b7dfbf273e44506d3274" Jan 21 11:36:30 crc kubenswrapper[4881]: I0121 11:36:30.228046 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:36:30 crc kubenswrapper[4881]: E0121 11:36:30.228379 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:36:31 crc kubenswrapper[4881]: I0121 11:36:31.242491 4881 generic.go:334] "Generic (PLEG): container finished" podID="157a809f-f6fa-43dc-b73d-380976da1312" containerID="f1db5909ded55b74a3536abb2e28180a19052deddcecbb0f0ed78e60d78a0e4f" exitCode=0 Jan 21 11:36:31 crc kubenswrapper[4881]: I0121 11:36:31.242582 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" event={"ID":"157a809f-f6fa-43dc-b73d-380976da1312","Type":"ContainerDied","Data":"f1db5909ded55b74a3536abb2e28180a19052deddcecbb0f0ed78e60d78a0e4f"} Jan 21 11:36:31 crc kubenswrapper[4881]: I0121 11:36:31.250280 4881 scope.go:117] "RemoveContainer" containerID="c1eba3ae03b1d6805b90d42d0ec2f798fa4704781a61dbdfa8159f414d7bb80e" Jan 21 11:36:31 crc kubenswrapper[4881]: I0121 11:36:31.281058 4881 scope.go:117] "RemoveContainer" containerID="c222168e828ddf8dc31adf5d20e6251d1aebd2db36a121297ee44763be9bc74e" Jan 21 11:36:31 crc kubenswrapper[4881]: I0121 11:36:31.344272 4881 scope.go:117] "RemoveContainer" containerID="0ab0a82d406b0a4031e5637f72af69a714ded06513932b035aeb5ac564f21b6b" Jan 21 11:36:31 crc kubenswrapper[4881]: I0121 11:36:31.382865 4881 scope.go:117] "RemoveContainer" containerID="d6ee22258af69df6704251a1ea48a067b0aad9b9017145fdec7581e1437ace89" Jan 21 11:36:31 crc kubenswrapper[4881]: I0121 11:36:31.403176 4881 scope.go:117] "RemoveContainer" containerID="48d5d26b6c9086a6b947d5294b328f1c7e8f26fa1ce1593b0120714fc18e44b1" Jan 21 11:36:31 crc kubenswrapper[4881]: I0121 11:36:31.456965 4881 scope.go:117] "RemoveContainer" containerID="5e0abf8ffd3df2b4543f3b78f4df1de894199c4c001e6db2e5a3872e46d7a54b" Jan 21 11:36:32 crc kubenswrapper[4881]: I0121 11:36:32.728492 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" Jan 21 11:36:32 crc kubenswrapper[4881]: I0121 11:36:32.915850 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcxvz\" (UniqueName: \"kubernetes.io/projected/157a809f-f6fa-43dc-b73d-380976da1312-kube-api-access-hcxvz\") pod \"157a809f-f6fa-43dc-b73d-380976da1312\" (UID: \"157a809f-f6fa-43dc-b73d-380976da1312\") " Jan 21 11:36:32 crc kubenswrapper[4881]: I0121 11:36:32.915977 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/157a809f-f6fa-43dc-b73d-380976da1312-ssh-key-openstack-edpm-ipam\") pod \"157a809f-f6fa-43dc-b73d-380976da1312\" (UID: \"157a809f-f6fa-43dc-b73d-380976da1312\") " Jan 21 11:36:32 crc kubenswrapper[4881]: I0121 11:36:32.916107 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/157a809f-f6fa-43dc-b73d-380976da1312-inventory-0\") pod \"157a809f-f6fa-43dc-b73d-380976da1312\" (UID: \"157a809f-f6fa-43dc-b73d-380976da1312\") " Jan 21 11:36:32 crc kubenswrapper[4881]: I0121 11:36:32.925447 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/157a809f-f6fa-43dc-b73d-380976da1312-kube-api-access-hcxvz" (OuterVolumeSpecName: "kube-api-access-hcxvz") pod "157a809f-f6fa-43dc-b73d-380976da1312" (UID: "157a809f-f6fa-43dc-b73d-380976da1312"). InnerVolumeSpecName "kube-api-access-hcxvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:36:32 crc kubenswrapper[4881]: I0121 11:36:32.970038 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/157a809f-f6fa-43dc-b73d-380976da1312-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "157a809f-f6fa-43dc-b73d-380976da1312" (UID: "157a809f-f6fa-43dc-b73d-380976da1312"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:36:32 crc kubenswrapper[4881]: I0121 11:36:32.985981 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/157a809f-f6fa-43dc-b73d-380976da1312-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "157a809f-f6fa-43dc-b73d-380976da1312" (UID: "157a809f-f6fa-43dc-b73d-380976da1312"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.019355 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcxvz\" (UniqueName: \"kubernetes.io/projected/157a809f-f6fa-43dc-b73d-380976da1312-kube-api-access-hcxvz\") on node \"crc\" DevicePath \"\"" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.019385 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/157a809f-f6fa-43dc-b73d-380976da1312-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.019398 4881 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/157a809f-f6fa-43dc-b73d-380976da1312-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.264832 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" event={"ID":"157a809f-f6fa-43dc-b73d-380976da1312","Type":"ContainerDied","Data":"66235c313b4580faaef6c50feeddc7e2004a0ad3aed1911d1a15ba7785f574fc"} Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.264878 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66235c313b4580faaef6c50feeddc7e2004a0ad3aed1911d1a15ba7785f574fc" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.264916 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.370407 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr"] Jan 21 11:36:33 crc kubenswrapper[4881]: E0121 11:36:33.370959 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="157a809f-f6fa-43dc-b73d-380976da1312" containerName="ssh-known-hosts-edpm-deployment" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.370977 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="157a809f-f6fa-43dc-b73d-380976da1312" containerName="ssh-known-hosts-edpm-deployment" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.371166 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="157a809f-f6fa-43dc-b73d-380976da1312" containerName="ssh-known-hosts-edpm-deployment" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.371937 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.384484 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.384720 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.385281 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.385428 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.388504 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr"] Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.447056 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xfqr\" (UID: \"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.447147 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xfqr\" (UID: \"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.447233 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47xx6\" (UniqueName: \"kubernetes.io/projected/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-kube-api-access-47xx6\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xfqr\" (UID: \"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.549241 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xfqr\" (UID: \"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.549315 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xfqr\" (UID: \"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.549352 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47xx6\" (UniqueName: \"kubernetes.io/projected/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-kube-api-access-47xx6\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-7xfqr\" (UID: \"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.553816 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xfqr\" (UID: \"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.560069 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xfqr\" (UID: \"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.567474 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47xx6\" (UniqueName: \"kubernetes.io/projected/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-kube-api-access-47xx6\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xfqr\" (UID: \"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.698459 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" Jan 21 11:36:34 crc kubenswrapper[4881]: I0121 11:36:34.327866 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr"] Jan 21 11:36:35 crc kubenswrapper[4881]: I0121 11:36:35.290677 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" event={"ID":"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d","Type":"ContainerStarted","Data":"409f626ab96ec0faa85083350b4a7d7f3a62c09e89bee9c03ac1296a6549197d"} Jan 21 11:36:35 crc kubenswrapper[4881]: I0121 11:36:35.291180 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" event={"ID":"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d","Type":"ContainerStarted","Data":"a859b9ac6ed5fc21e2f0d9aea74ba2e88254a24acdbcf86471001e1c0e500490"} Jan 21 11:36:35 crc kubenswrapper[4881]: I0121 11:36:35.309095 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" podStartSLOduration=1.860044757 podStartE2EDuration="2.309070995s" podCreationTimestamp="2026-01-21 11:36:33 +0000 UTC" firstStartedPulling="2026-01-21 11:36:34.353110692 +0000 UTC m=+2381.613067161" lastFinishedPulling="2026-01-21 11:36:34.80213691 +0000 UTC m=+2382.062093399" observedRunningTime="2026-01-21 11:36:35.305945928 +0000 UTC m=+2382.565902397" watchObservedRunningTime="2026-01-21 11:36:35.309070995 +0000 UTC m=+2382.569027464" Jan 21 11:36:44 crc kubenswrapper[4881]: I0121 11:36:44.371858 4881 generic.go:334] "Generic (PLEG): container finished" podID="af647318-40b6-4ce3-8f5b-c3af4c8dcb0d" containerID="409f626ab96ec0faa85083350b4a7d7f3a62c09e89bee9c03ac1296a6549197d" exitCode=0 Jan 21 11:36:44 crc kubenswrapper[4881]: I0121 11:36:44.371937 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" event={"ID":"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d","Type":"ContainerDied","Data":"409f626ab96ec0faa85083350b4a7d7f3a62c09e89bee9c03ac1296a6549197d"} Jan 21 11:36:45 crc kubenswrapper[4881]: I0121 11:36:45.311274 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:36:45 crc kubenswrapper[4881]: E0121 11:36:45.311941 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:36:45 crc kubenswrapper[4881]: I0121 11:36:45.876034 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" Jan 21 11:36:45 crc kubenswrapper[4881]: I0121 11:36:45.976584 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-inventory\") pod \"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d\" (UID: \"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d\") " Jan 21 11:36:45 crc kubenswrapper[4881]: I0121 11:36:45.976729 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47xx6\" (UniqueName: \"kubernetes.io/projected/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-kube-api-access-47xx6\") pod \"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d\" (UID: \"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d\") " Jan 21 11:36:45 crc kubenswrapper[4881]: I0121 11:36:45.983061 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-kube-api-access-47xx6" (OuterVolumeSpecName: "kube-api-access-47xx6") pod "af647318-40b6-4ce3-8f5b-c3af4c8dcb0d" (UID: "af647318-40b6-4ce3-8f5b-c3af4c8dcb0d"). InnerVolumeSpecName "kube-api-access-47xx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.010989 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-inventory" (OuterVolumeSpecName: "inventory") pod "af647318-40b6-4ce3-8f5b-c3af4c8dcb0d" (UID: "af647318-40b6-4ce3-8f5b-c3af4c8dcb0d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.078146 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-ssh-key-openstack-edpm-ipam\") pod \"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d\" (UID: \"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d\") " Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.078532 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.078554 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47xx6\" (UniqueName: \"kubernetes.io/projected/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-kube-api-access-47xx6\") on node \"crc\" DevicePath \"\"" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.104647 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "af647318-40b6-4ce3-8f5b-c3af4c8dcb0d" (UID: "af647318-40b6-4ce3-8f5b-c3af4c8dcb0d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.181083 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.393842 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" event={"ID":"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d","Type":"ContainerDied","Data":"a859b9ac6ed5fc21e2f0d9aea74ba2e88254a24acdbcf86471001e1c0e500490"} Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.393901 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a859b9ac6ed5fc21e2f0d9aea74ba2e88254a24acdbcf86471001e1c0e500490" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.393902 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.497391 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn"] Jan 21 11:36:46 crc kubenswrapper[4881]: E0121 11:36:46.498208 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af647318-40b6-4ce3-8f5b-c3af4c8dcb0d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.498236 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="af647318-40b6-4ce3-8f5b-c3af4c8dcb0d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.498475 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="af647318-40b6-4ce3-8f5b-c3af4c8dcb0d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.499671 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.505663 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.506103 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn"] Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.506642 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.506713 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.509038 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.590718 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndnpw\" (UniqueName: \"kubernetes.io/projected/828bd055-053d-43b7-b76f-746438bb9b41-kube-api-access-ndnpw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn\" (UID: \"828bd055-053d-43b7-b76f-746438bb9b41\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.591043 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/828bd055-053d-43b7-b76f-746438bb9b41-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn\" (UID: \"828bd055-053d-43b7-b76f-746438bb9b41\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.591109 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/828bd055-053d-43b7-b76f-746438bb9b41-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn\" (UID: \"828bd055-053d-43b7-b76f-746438bb9b41\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.692658 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndnpw\" (UniqueName: \"kubernetes.io/projected/828bd055-053d-43b7-b76f-746438bb9b41-kube-api-access-ndnpw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn\" (UID: \"828bd055-053d-43b7-b76f-746438bb9b41\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.692771 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/828bd055-053d-43b7-b76f-746438bb9b41-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn\" (UID: \"828bd055-053d-43b7-b76f-746438bb9b41\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.692827 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/828bd055-053d-43b7-b76f-746438bb9b41-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn\" (UID: \"828bd055-053d-43b7-b76f-746438bb9b41\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.696878 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/828bd055-053d-43b7-b76f-746438bb9b41-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn\" (UID: \"828bd055-053d-43b7-b76f-746438bb9b41\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.697376 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/828bd055-053d-43b7-b76f-746438bb9b41-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn\" (UID: \"828bd055-053d-43b7-b76f-746438bb9b41\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.709202 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndnpw\" (UniqueName: \"kubernetes.io/projected/828bd055-053d-43b7-b76f-746438bb9b41-kube-api-access-ndnpw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn\" (UID: \"828bd055-053d-43b7-b76f-746438bb9b41\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.828252 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" Jan 21 11:36:47 crc kubenswrapper[4881]: I0121 11:36:47.593424 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn"] Jan 21 11:36:48 crc kubenswrapper[4881]: I0121 11:36:48.410814 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" event={"ID":"828bd055-053d-43b7-b76f-746438bb9b41","Type":"ContainerStarted","Data":"29753d3eba82df008a09044e90acb3b1e9b17ea67ac8abcf21cad2cd4786c8d0"} Jan 21 11:36:49 crc kubenswrapper[4881]: I0121 11:36:49.421621 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" event={"ID":"828bd055-053d-43b7-b76f-746438bb9b41","Type":"ContainerStarted","Data":"6e3e0b0bdb0a610ffbd23e94a352b3de735fe924fe27e0ef3590b79f42b1d2cb"} Jan 21 11:36:49 crc kubenswrapper[4881]: I0121 11:36:49.450601 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" podStartSLOduration=2.754850763 podStartE2EDuration="3.450576623s" podCreationTimestamp="2026-01-21 11:36:46 +0000 UTC" firstStartedPulling="2026-01-21 11:36:47.590512255 +0000 UTC m=+2394.850468724" lastFinishedPulling="2026-01-21 11:36:48.286238115 +0000 UTC m=+2395.546194584" observedRunningTime="2026-01-21 11:36:49.440394404 +0000 UTC m=+2396.700350903" watchObservedRunningTime="2026-01-21 11:36:49.450576623 +0000 UTC m=+2396.710533092" Jan 21 11:36:57 crc kubenswrapper[4881]: I0121 11:36:57.312436 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:36:57 crc kubenswrapper[4881]: E0121 11:36:57.314076 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:36:59 crc kubenswrapper[4881]: I0121 11:36:59.533923 4881 generic.go:334] "Generic (PLEG): container finished" podID="828bd055-053d-43b7-b76f-746438bb9b41" containerID="6e3e0b0bdb0a610ffbd23e94a352b3de735fe924fe27e0ef3590b79f42b1d2cb" exitCode=0 Jan 21 11:36:59 crc kubenswrapper[4881]: I0121 11:36:59.534037 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" event={"ID":"828bd055-053d-43b7-b76f-746438bb9b41","Type":"ContainerDied","Data":"6e3e0b0bdb0a610ffbd23e94a352b3de735fe924fe27e0ef3590b79f42b1d2cb"} Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.008833 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.111522 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/828bd055-053d-43b7-b76f-746438bb9b41-inventory\") pod \"828bd055-053d-43b7-b76f-746438bb9b41\" (UID: \"828bd055-053d-43b7-b76f-746438bb9b41\") " Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.111882 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndnpw\" (UniqueName: \"kubernetes.io/projected/828bd055-053d-43b7-b76f-746438bb9b41-kube-api-access-ndnpw\") pod \"828bd055-053d-43b7-b76f-746438bb9b41\" (UID: \"828bd055-053d-43b7-b76f-746438bb9b41\") " Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.111937 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/828bd055-053d-43b7-b76f-746438bb9b41-ssh-key-openstack-edpm-ipam\") pod \"828bd055-053d-43b7-b76f-746438bb9b41\" (UID: \"828bd055-053d-43b7-b76f-746438bb9b41\") " Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.120108 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/828bd055-053d-43b7-b76f-746438bb9b41-kube-api-access-ndnpw" (OuterVolumeSpecName: "kube-api-access-ndnpw") pod "828bd055-053d-43b7-b76f-746438bb9b41" (UID: "828bd055-053d-43b7-b76f-746438bb9b41"). InnerVolumeSpecName "kube-api-access-ndnpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.143022 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/828bd055-053d-43b7-b76f-746438bb9b41-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "828bd055-053d-43b7-b76f-746438bb9b41" (UID: "828bd055-053d-43b7-b76f-746438bb9b41"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.144925 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/828bd055-053d-43b7-b76f-746438bb9b41-inventory" (OuterVolumeSpecName: "inventory") pod "828bd055-053d-43b7-b76f-746438bb9b41" (UID: "828bd055-053d-43b7-b76f-746438bb9b41"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.215069 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndnpw\" (UniqueName: \"kubernetes.io/projected/828bd055-053d-43b7-b76f-746438bb9b41-kube-api-access-ndnpw\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.215108 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/828bd055-053d-43b7-b76f-746438bb9b41-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.215123 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/828bd055-053d-43b7-b76f-746438bb9b41-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.559601 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" event={"ID":"828bd055-053d-43b7-b76f-746438bb9b41","Type":"ContainerDied","Data":"29753d3eba82df008a09044e90acb3b1e9b17ea67ac8abcf21cad2cd4786c8d0"} Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.559645 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29753d3eba82df008a09044e90acb3b1e9b17ea67ac8abcf21cad2cd4786c8d0" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.559663 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.650484 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l"] Jan 21 11:37:01 crc kubenswrapper[4881]: E0121 11:37:01.650924 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="828bd055-053d-43b7-b76f-746438bb9b41" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.650943 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="828bd055-053d-43b7-b76f-746438bb9b41" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.651141 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="828bd055-053d-43b7-b76f-746438bb9b41" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.651819 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.654763 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.655164 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.655426 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.655749 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.655952 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.656083 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.660121 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.668118 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.679253 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l"] Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.825681 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.825812 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.825870 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.825929 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-telemetry-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.825966 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.826096 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.826141 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.826177 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.826266 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.826287 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.826333 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: 
\"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.826427 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.826459 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.826570 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtl5j\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-kube-api-access-dtl5j\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.929442 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.930375 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.930473 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.930589 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtl5j\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-kube-api-access-dtl5j\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc 
kubenswrapper[4881]: I0121 11:37:01.930640 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.930677 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.930711 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.930779 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.930827 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.930895 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.930934 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.930973 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.931041 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.931068 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.936542 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.937407 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.938655 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.941281 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.943878 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.944041 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.944144 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.944640 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.945215 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.945359 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.945385 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.946129 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.949687 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtl5j\" (UniqueName: 
\"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-kube-api-access-dtl5j\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.954123 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.973337 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:02 crc kubenswrapper[4881]: I0121 11:37:02.560429 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l"] Jan 21 11:37:02 crc kubenswrapper[4881]: I0121 11:37:02.572578 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" event={"ID":"1ef84c59-8554-4369-9f9f-877505b3b952","Type":"ContainerStarted","Data":"b627d71bc3743459a8f29f87d494d94cfa00a3d17cac848e85ffa73ca6514114"} Jan 21 11:37:04 crc kubenswrapper[4881]: I0121 11:37:04.593455 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" event={"ID":"1ef84c59-8554-4369-9f9f-877505b3b952","Type":"ContainerStarted","Data":"1381915837d6170a260b0381bdf5de357458d9bab9d662fd7948a15639c1985e"} Jan 21 11:37:04 crc kubenswrapper[4881]: I0121 11:37:04.627945 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" podStartSLOduration=2.796711032 podStartE2EDuration="3.627914656s" podCreationTimestamp="2026-01-21 11:37:01 +0000 UTC" firstStartedPulling="2026-01-21 11:37:02.55769026 +0000 UTC m=+2409.817646729" lastFinishedPulling="2026-01-21 11:37:03.388893844 +0000 UTC m=+2410.648850353" observedRunningTime="2026-01-21 11:37:04.621711755 +0000 UTC m=+2411.881668224" watchObservedRunningTime="2026-01-21 11:37:04.627914656 +0000 UTC m=+2411.887871125" Jan 21 11:37:09 crc kubenswrapper[4881]: I0121 11:37:09.312660 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:37:09 crc kubenswrapper[4881]: E0121 11:37:09.313691 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:37:20 crc kubenswrapper[4881]: I0121 11:37:20.310685 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:37:20 crc kubenswrapper[4881]: E0121 11:37:20.311766 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:37:33 crc kubenswrapper[4881]: I0121 11:37:33.318472 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:37:33 crc kubenswrapper[4881]: E0121 11:37:33.319209 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:37:47 crc kubenswrapper[4881]: I0121 11:37:47.310539 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:37:47 crc kubenswrapper[4881]: E0121 11:37:47.311288 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:37:48 crc kubenswrapper[4881]: I0121 11:37:48.050457 4881 generic.go:334] "Generic (PLEG): container finished" podID="1ef84c59-8554-4369-9f9f-877505b3b952" containerID="1381915837d6170a260b0381bdf5de357458d9bab9d662fd7948a15639c1985e" exitCode=0 Jan 21 11:37:48 crc kubenswrapper[4881]: I0121 11:37:48.050674 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" event={"ID":"1ef84c59-8554-4369-9f9f-877505b3b952","Type":"ContainerDied","Data":"1381915837d6170a260b0381bdf5de357458d9bab9d662fd7948a15639c1985e"} Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.514036 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.587120 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-libvirt-combined-ca-bundle\") pod \"1ef84c59-8554-4369-9f9f-877505b3b952\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.587261 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-ovn-combined-ca-bundle\") pod \"1ef84c59-8554-4369-9f9f-877505b3b952\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.587278 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-repo-setup-combined-ca-bundle\") pod \"1ef84c59-8554-4369-9f9f-877505b3b952\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.587303 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"1ef84c59-8554-4369-9f9f-877505b3b952\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.587388 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"1ef84c59-8554-4369-9f9f-877505b3b952\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.587418 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-ssh-key-openstack-edpm-ipam\") pod \"1ef84c59-8554-4369-9f9f-877505b3b952\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.587452 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-bootstrap-combined-ca-bundle\") pod \"1ef84c59-8554-4369-9f9f-877505b3b952\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.587497 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"1ef84c59-8554-4369-9f9f-877505b3b952\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.587547 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-neutron-metadata-combined-ca-bundle\") pod 
\"1ef84c59-8554-4369-9f9f-877505b3b952\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.587601 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-inventory\") pod \"1ef84c59-8554-4369-9f9f-877505b3b952\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.587675 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-ovn-default-certs-0\") pod \"1ef84c59-8554-4369-9f9f-877505b3b952\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.587712 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-nova-combined-ca-bundle\") pod \"1ef84c59-8554-4369-9f9f-877505b3b952\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.587746 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-telemetry-combined-ca-bundle\") pod \"1ef84c59-8554-4369-9f9f-877505b3b952\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.587795 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtl5j\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-kube-api-access-dtl5j\") pod \"1ef84c59-8554-4369-9f9f-877505b3b952\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.595698 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "1ef84c59-8554-4369-9f9f-877505b3b952" (UID: "1ef84c59-8554-4369-9f9f-877505b3b952"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.595730 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "1ef84c59-8554-4369-9f9f-877505b3b952" (UID: "1ef84c59-8554-4369-9f9f-877505b3b952"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.595828 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "1ef84c59-8554-4369-9f9f-877505b3b952" (UID: "1ef84c59-8554-4369-9f9f-877505b3b952"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.596323 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "1ef84c59-8554-4369-9f9f-877505b3b952" (UID: "1ef84c59-8554-4369-9f9f-877505b3b952"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.596475 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-kube-api-access-dtl5j" (OuterVolumeSpecName: "kube-api-access-dtl5j") pod "1ef84c59-8554-4369-9f9f-877505b3b952" (UID: "1ef84c59-8554-4369-9f9f-877505b3b952"). InnerVolumeSpecName "kube-api-access-dtl5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.596538 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "1ef84c59-8554-4369-9f9f-877505b3b952" (UID: "1ef84c59-8554-4369-9f9f-877505b3b952"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.596646 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "1ef84c59-8554-4369-9f9f-877505b3b952" (UID: "1ef84c59-8554-4369-9f9f-877505b3b952"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.598500 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "1ef84c59-8554-4369-9f9f-877505b3b952" (UID: "1ef84c59-8554-4369-9f9f-877505b3b952"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.598963 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "1ef84c59-8554-4369-9f9f-877505b3b952" (UID: "1ef84c59-8554-4369-9f9f-877505b3b952"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.600979 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "1ef84c59-8554-4369-9f9f-877505b3b952" (UID: "1ef84c59-8554-4369-9f9f-877505b3b952"). 
InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.600995 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "1ef84c59-8554-4369-9f9f-877505b3b952" (UID: "1ef84c59-8554-4369-9f9f-877505b3b952"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.601238 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "1ef84c59-8554-4369-9f9f-877505b3b952" (UID: "1ef84c59-8554-4369-9f9f-877505b3b952"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.624233 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1ef84c59-8554-4369-9f9f-877505b3b952" (UID: "1ef84c59-8554-4369-9f9f-877505b3b952"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.629538 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-inventory" (OuterVolumeSpecName: "inventory") pod "1ef84c59-8554-4369-9f9f-877505b3b952" (UID: "1ef84c59-8554-4369-9f9f-877505b3b952"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.691372 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.691442 4881 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.691460 4881 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.691476 4881 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.691489 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtl5j\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-kube-api-access-dtl5j\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.691500 4881 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.691512 4881 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.691523 4881 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.691535 4881 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.691548 4881 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.691560 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.691571 4881 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.691582 4881 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.691597 4881 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.074829 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" event={"ID":"1ef84c59-8554-4369-9f9f-877505b3b952","Type":"ContainerDied","Data":"b627d71bc3743459a8f29f87d494d94cfa00a3d17cac848e85ffa73ca6514114"} Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.075249 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b627d71bc3743459a8f29f87d494d94cfa00a3d17cac848e85ffa73ca6514114" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.074923 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.179234 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg"] Jan 21 11:37:50 crc kubenswrapper[4881]: E0121 11:37:50.179674 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ef84c59-8554-4369-9f9f-877505b3b952" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.179695 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ef84c59-8554-4369-9f9f-877505b3b952" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.179943 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ef84c59-8554-4369-9f9f-877505b3b952" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.181151 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.185348 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.185434 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.185433 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.186658 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.187193 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.193798 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg"] Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.306848 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d4sgg\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.306935 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d4sgg\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.306979 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d4sgg\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.307020 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc7t5\" (UniqueName: \"kubernetes.io/projected/11ba18fa-d69e-4a6b-9796-e92d95d702ec-kube-api-access-jc7t5\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d4sgg\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.307038 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d4sgg\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.409223 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d4sgg\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.409591 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d4sgg\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.409689 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d4sgg\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.409728 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc7t5\" (UniqueName: \"kubernetes.io/projected/11ba18fa-d69e-4a6b-9796-e92d95d702ec-kube-api-access-jc7t5\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d4sgg\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.409762 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d4sgg\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.411287 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d4sgg\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.414919 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d4sgg\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.414995 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d4sgg\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.415348 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d4sgg\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.430824 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc7t5\" (UniqueName: \"kubernetes.io/projected/11ba18fa-d69e-4a6b-9796-e92d95d702ec-kube-api-access-jc7t5\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d4sgg\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.499885 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:51 crc kubenswrapper[4881]: I0121 11:37:51.175189 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 11:37:51 crc kubenswrapper[4881]: I0121 11:37:51.176426 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg"] Jan 21 11:37:52 crc kubenswrapper[4881]: I0121 11:37:52.102971 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" event={"ID":"11ba18fa-d69e-4a6b-9796-e92d95d702ec","Type":"ContainerStarted","Data":"aa93fb13f72092ec97b0673ec20604bc730432dff0f5669249ccca4c35302da2"} Jan 21 11:37:53 crc kubenswrapper[4881]: I0121 11:37:53.116898 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" event={"ID":"11ba18fa-d69e-4a6b-9796-e92d95d702ec","Type":"ContainerStarted","Data":"f2af05d022273527afe8fbabe5b1e255d94275ede6153a3e7df06926a5b97e4b"} Jan 21 11:37:53 crc kubenswrapper[4881]: I0121 11:37:53.141272 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" podStartSLOduration=2.444332492 podStartE2EDuration="3.141250442s" podCreationTimestamp="2026-01-21 11:37:50 +0000 UTC" firstStartedPulling="2026-01-21 11:37:51.174821143 +0000 UTC m=+2458.434777632" lastFinishedPulling="2026-01-21 11:37:51.871739113 +0000 UTC m=+2459.131695582" observedRunningTime="2026-01-21 11:37:53.139151211 +0000 UTC m=+2460.399107690" watchObservedRunningTime="2026-01-21 11:37:53.141250442 +0000 UTC m=+2460.401206911" Jan 21 11:38:02 crc kubenswrapper[4881]: I0121 11:38:02.312342 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:38:02 crc kubenswrapper[4881]: E0121 11:38:02.313982 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:38:15 crc kubenswrapper[4881]: I0121 11:38:15.311014 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:38:15 crc kubenswrapper[4881]: E0121 11:38:15.311946 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:38:29 crc kubenswrapper[4881]: I0121 11:38:29.311051 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:38:29 crc kubenswrapper[4881]: E0121 11:38:29.311824 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:38:43 crc kubenswrapper[4881]: I0121 11:38:43.319428 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:38:43 crc kubenswrapper[4881]: E0121 11:38:43.320725 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:38:57 crc kubenswrapper[4881]: I0121 11:38:57.311058 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:38:57 crc kubenswrapper[4881]: E0121 11:38:57.311923 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:39:11 crc kubenswrapper[4881]: I0121 11:39:11.216765 4881 generic.go:334] "Generic (PLEG): container finished" podID="11ba18fa-d69e-4a6b-9796-e92d95d702ec" containerID="f2af05d022273527afe8fbabe5b1e255d94275ede6153a3e7df06926a5b97e4b" exitCode=0 Jan 21 11:39:11 crc kubenswrapper[4881]: I0121 11:39:11.216862 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" event={"ID":"11ba18fa-d69e-4a6b-9796-e92d95d702ec","Type":"ContainerDied","Data":"f2af05d022273527afe8fbabe5b1e255d94275ede6153a3e7df06926a5b97e4b"} Jan 21 11:39:11 crc kubenswrapper[4881]: I0121 11:39:11.312219 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:39:11 crc kubenswrapper[4881]: E0121 11:39:11.312666 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.670630 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.813304 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ssh-key-openstack-edpm-ipam\") pod \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.813371 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ovncontroller-config-0\") pod \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.813455 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ovn-combined-ca-bundle\") pod \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.813507 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-inventory\") pod \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.813561 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jc7t5\" (UniqueName: \"kubernetes.io/projected/11ba18fa-d69e-4a6b-9796-e92d95d702ec-kube-api-access-jc7t5\") pod \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.821057 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "11ba18fa-d69e-4a6b-9796-e92d95d702ec" (UID: "11ba18fa-d69e-4a6b-9796-e92d95d702ec"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.821631 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11ba18fa-d69e-4a6b-9796-e92d95d702ec-kube-api-access-jc7t5" (OuterVolumeSpecName: "kube-api-access-jc7t5") pod "11ba18fa-d69e-4a6b-9796-e92d95d702ec" (UID: "11ba18fa-d69e-4a6b-9796-e92d95d702ec"). InnerVolumeSpecName "kube-api-access-jc7t5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.857776 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "11ba18fa-d69e-4a6b-9796-e92d95d702ec" (UID: "11ba18fa-d69e-4a6b-9796-e92d95d702ec"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.866420 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "11ba18fa-d69e-4a6b-9796-e92d95d702ec" (UID: "11ba18fa-d69e-4a6b-9796-e92d95d702ec"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.869688 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-inventory" (OuterVolumeSpecName: "inventory") pod "11ba18fa-d69e-4a6b-9796-e92d95d702ec" (UID: "11ba18fa-d69e-4a6b-9796-e92d95d702ec"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.916656 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.916694 4881 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.916708 4881 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.916726 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.916740 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jc7t5\" (UniqueName: \"kubernetes.io/projected/11ba18fa-d69e-4a6b-9796-e92d95d702ec-kube-api-access-jc7t5\") on node \"crc\" DevicePath \"\"" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.234369 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" event={"ID":"11ba18fa-d69e-4a6b-9796-e92d95d702ec","Type":"ContainerDied","Data":"aa93fb13f72092ec97b0673ec20604bc730432dff0f5669249ccca4c35302da2"} Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.234419 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa93fb13f72092ec97b0673ec20604bc730432dff0f5669249ccca4c35302da2" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.234423 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.438486 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp"] Jan 21 11:39:13 crc kubenswrapper[4881]: E0121 11:39:13.439384 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ba18fa-d69e-4a6b-9796-e92d95d702ec" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.439408 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ba18fa-d69e-4a6b-9796-e92d95d702ec" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.439713 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="11ba18fa-d69e-4a6b-9796-e92d95d702ec" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.440569 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.443156 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.443316 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.443350 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.443438 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.443614 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.445051 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.458996 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp"] Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.530259 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.530356 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.530455 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-k6fz5\" (UniqueName: \"kubernetes.io/projected/0e428246-daf9-40a4-9049-74281259f82c-kube-api-access-k6fz5\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.530537 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.530563 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.530600 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.632490 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.632839 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.632982 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6fz5\" (UniqueName: \"kubernetes.io/projected/0e428246-daf9-40a4-9049-74281259f82c-kube-api-access-k6fz5\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.633118 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-neutron-ovn-metadata-agent-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.633223 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.633361 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.637957 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.638166 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.638249 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.638310 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.638532 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.649695 4881 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6fz5\" (UniqueName: \"kubernetes.io/projected/0e428246-daf9-40a4-9049-74281259f82c-kube-api-access-k6fz5\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.757409 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:14 crc kubenswrapper[4881]: I0121 11:39:14.282778 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp"] Jan 21 11:39:15 crc kubenswrapper[4881]: I0121 11:39:15.255628 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" event={"ID":"0e428246-daf9-40a4-9049-74281259f82c","Type":"ContainerStarted","Data":"6aec53e337dc4d6cd6cda8ace05ff6550a2e5c28e5ac964d4579632056bbce09"} Jan 21 11:39:15 crc kubenswrapper[4881]: I0121 11:39:15.256066 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" event={"ID":"0e428246-daf9-40a4-9049-74281259f82c","Type":"ContainerStarted","Data":"608cb0dcffc24c7cbd1b5fbe53fd92536c3ad4a45a9899eb73a91b1b55cde671"} Jan 21 11:39:15 crc kubenswrapper[4881]: I0121 11:39:15.280156 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" podStartSLOduration=1.771935754 podStartE2EDuration="2.280137929s" podCreationTimestamp="2026-01-21 11:39:13 +0000 UTC" firstStartedPulling="2026-01-21 11:39:14.295462614 +0000 UTC m=+2541.555419083" lastFinishedPulling="2026-01-21 11:39:14.803664779 +0000 UTC m=+2542.063621258" observedRunningTime="2026-01-21 11:39:15.274076721 +0000 UTC m=+2542.534033190" watchObservedRunningTime="2026-01-21 11:39:15.280137929 +0000 UTC m=+2542.540094398" Jan 21 11:39:25 crc kubenswrapper[4881]: I0121 11:39:25.311591 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:39:25 crc kubenswrapper[4881]: E0121 11:39:25.314090 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:39:37 crc kubenswrapper[4881]: I0121 11:39:37.311757 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:39:37 crc kubenswrapper[4881]: E0121 11:39:37.312878 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:39:48 crc kubenswrapper[4881]: I0121 11:39:48.311460 4881 
scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:39:48 crc kubenswrapper[4881]: E0121 11:39:48.312430 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:40:02 crc kubenswrapper[4881]: I0121 11:40:02.310843 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:40:02 crc kubenswrapper[4881]: E0121 11:40:02.311514 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:40:13 crc kubenswrapper[4881]: I0121 11:40:13.317610 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:40:13 crc kubenswrapper[4881]: E0121 11:40:13.318429 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:40:14 crc kubenswrapper[4881]: I0121 11:40:14.155020 4881 generic.go:334] "Generic (PLEG): container finished" podID="0e428246-daf9-40a4-9049-74281259f82c" containerID="6aec53e337dc4d6cd6cda8ace05ff6550a2e5c28e5ac964d4579632056bbce09" exitCode=0 Jan 21 11:40:14 crc kubenswrapper[4881]: I0121 11:40:14.155082 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" event={"ID":"0e428246-daf9-40a4-9049-74281259f82c","Type":"ContainerDied","Data":"6aec53e337dc4d6cd6cda8ace05ff6550a2e5c28e5ac964d4579632056bbce09"} Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.614497 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.725517 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-nova-metadata-neutron-config-0\") pod \"0e428246-daf9-40a4-9049-74281259f82c\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.725628 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-ssh-key-openstack-edpm-ipam\") pod \"0e428246-daf9-40a4-9049-74281259f82c\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.725728 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-inventory\") pod \"0e428246-daf9-40a4-9049-74281259f82c\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.725803 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6fz5\" (UniqueName: \"kubernetes.io/projected/0e428246-daf9-40a4-9049-74281259f82c-kube-api-access-k6fz5\") pod \"0e428246-daf9-40a4-9049-74281259f82c\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.725882 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-neutron-metadata-combined-ca-bundle\") pod \"0e428246-daf9-40a4-9049-74281259f82c\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.725923 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"0e428246-daf9-40a4-9049-74281259f82c\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.732173 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e428246-daf9-40a4-9049-74281259f82c-kube-api-access-k6fz5" (OuterVolumeSpecName: "kube-api-access-k6fz5") pod "0e428246-daf9-40a4-9049-74281259f82c" (UID: "0e428246-daf9-40a4-9049-74281259f82c"). InnerVolumeSpecName "kube-api-access-k6fz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.732354 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "0e428246-daf9-40a4-9049-74281259f82c" (UID: "0e428246-daf9-40a4-9049-74281259f82c"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.754666 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-inventory" (OuterVolumeSpecName: "inventory") pod "0e428246-daf9-40a4-9049-74281259f82c" (UID: "0e428246-daf9-40a4-9049-74281259f82c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.764832 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "0e428246-daf9-40a4-9049-74281259f82c" (UID: "0e428246-daf9-40a4-9049-74281259f82c"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.766314 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0e428246-daf9-40a4-9049-74281259f82c" (UID: "0e428246-daf9-40a4-9049-74281259f82c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.772762 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "0e428246-daf9-40a4-9049-74281259f82c" (UID: "0e428246-daf9-40a4-9049-74281259f82c"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.827980 4881 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.828011 4881 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.828025 4881 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.828034 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.828045 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.828053 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6fz5\" (UniqueName: \"kubernetes.io/projected/0e428246-daf9-40a4-9049-74281259f82c-kube-api-access-k6fz5\") on node \"crc\" DevicePath \"\"" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.182758 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" event={"ID":"0e428246-daf9-40a4-9049-74281259f82c","Type":"ContainerDied","Data":"608cb0dcffc24c7cbd1b5fbe53fd92536c3ad4a45a9899eb73a91b1b55cde671"} Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.182831 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="608cb0dcffc24c7cbd1b5fbe53fd92536c3ad4a45a9899eb73a91b1b55cde671" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.182901 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.475520 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq"] Jan 21 11:40:16 crc kubenswrapper[4881]: E0121 11:40:16.476157 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e428246-daf9-40a4-9049-74281259f82c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.476172 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e428246-daf9-40a4-9049-74281259f82c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.476377 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e428246-daf9-40a4-9049-74281259f82c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.477367 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.480699 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.481154 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.481480 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.482585 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.496572 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.507451 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq"] Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.599639 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptwlv\" (UniqueName: \"kubernetes.io/projected/38ac646b-177b-488d-853b-e04b22f267a4-kube-api-access-ptwlv\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.599710 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.600108 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq\" 
(UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.600158 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.600198 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.702436 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptwlv\" (UniqueName: \"kubernetes.io/projected/38ac646b-177b-488d-853b-e04b22f267a4-kube-api-access-ptwlv\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.702503 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.702544 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.702586 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.702623 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.708104 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.708361 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.708655 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.712848 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.727085 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptwlv\" (UniqueName: \"kubernetes.io/projected/38ac646b-177b-488d-853b-e04b22f267a4-kube-api-access-ptwlv\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.805867 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:17 crc kubenswrapper[4881]: I0121 11:40:17.487227 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq"] Jan 21 11:40:18 crc kubenswrapper[4881]: I0121 11:40:18.207777 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" event={"ID":"38ac646b-177b-488d-853b-e04b22f267a4","Type":"ContainerStarted","Data":"2e34d3926c62f8cffffc796ec975008bf3545972abcc913f207930e4451b062e"} Jan 21 11:40:18 crc kubenswrapper[4881]: I0121 11:40:18.208411 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" event={"ID":"38ac646b-177b-488d-853b-e04b22f267a4","Type":"ContainerStarted","Data":"dc79678ab6ba1932de7e4e05e7465b949910c18ea04deeee070bef7c91f2f1e4"} Jan 21 11:40:18 crc kubenswrapper[4881]: I0121 11:40:18.410670 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" podStartSLOduration=1.9996820450000001 podStartE2EDuration="2.410646552s" podCreationTimestamp="2026-01-21 11:40:16 +0000 UTC" firstStartedPulling="2026-01-21 11:40:17.497604489 +0000 UTC m=+2604.757560958" lastFinishedPulling="2026-01-21 11:40:17.908568996 +0000 UTC m=+2605.168525465" observedRunningTime="2026-01-21 11:40:18.234593867 +0000 UTC m=+2605.494550336" watchObservedRunningTime="2026-01-21 11:40:18.410646552 +0000 UTC m=+2605.670603021" Jan 21 11:40:24 crc kubenswrapper[4881]: I0121 11:40:24.312105 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:40:24 crc kubenswrapper[4881]: E0121 11:40:24.312861 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:40:35 crc kubenswrapper[4881]: I0121 11:40:35.312033 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:40:35 crc kubenswrapper[4881]: E0121 11:40:35.313103 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:40:50 crc kubenswrapper[4881]: I0121 11:40:50.312528 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:40:50 crc kubenswrapper[4881]: E0121 11:40:50.313744 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:41:01 crc kubenswrapper[4881]: I0121 11:41:01.312088 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:41:01 crc kubenswrapper[4881]: E0121 11:41:01.313408 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:41:14 crc kubenswrapper[4881]: I0121 11:41:14.312262 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:41:14 crc kubenswrapper[4881]: E0121 11:41:14.313165 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:41:29 crc kubenswrapper[4881]: I0121 11:41:29.310769 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:41:29 crc kubenswrapper[4881]: E0121 11:41:29.311678 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:41:43 crc kubenswrapper[4881]: I0121 11:41:43.313880 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:41:44 crc kubenswrapper[4881]: I0121 11:41:44.267753 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"40878d2da6716331f0a893f4c9f3938e30cde34eaf4eb8051eda58bfc84a6a6c"} Jan 21 11:43:41 crc kubenswrapper[4881]: I0121 11:43:41.806640 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n45jf"] Jan 21 11:43:41 crc kubenswrapper[4881]: I0121 11:43:41.817172 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:41 crc kubenswrapper[4881]: I0121 11:43:41.820682 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n45jf"] Jan 21 11:43:41 crc kubenswrapper[4881]: I0121 11:43:41.823816 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ef5440b-a4c3-4e04-8e02-1055391021c7-catalog-content\") pod \"community-operators-n45jf\" (UID: \"1ef5440b-a4c3-4e04-8e02-1055391021c7\") " pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:41 crc kubenswrapper[4881]: I0121 11:43:41.824120 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ef5440b-a4c3-4e04-8e02-1055391021c7-utilities\") pod \"community-operators-n45jf\" (UID: \"1ef5440b-a4c3-4e04-8e02-1055391021c7\") " pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:41 crc kubenswrapper[4881]: I0121 11:43:41.824244 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk4qx\" (UniqueName: \"kubernetes.io/projected/1ef5440b-a4c3-4e04-8e02-1055391021c7-kube-api-access-qk4qx\") pod \"community-operators-n45jf\" (UID: \"1ef5440b-a4c3-4e04-8e02-1055391021c7\") " pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:41 crc kubenswrapper[4881]: I0121 11:43:41.926186 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk4qx\" (UniqueName: \"kubernetes.io/projected/1ef5440b-a4c3-4e04-8e02-1055391021c7-kube-api-access-qk4qx\") pod \"community-operators-n45jf\" (UID: \"1ef5440b-a4c3-4e04-8e02-1055391021c7\") " pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:41 crc kubenswrapper[4881]: I0121 11:43:41.926315 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ef5440b-a4c3-4e04-8e02-1055391021c7-catalog-content\") pod \"community-operators-n45jf\" (UID: \"1ef5440b-a4c3-4e04-8e02-1055391021c7\") " pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:41 crc kubenswrapper[4881]: I0121 11:43:41.926448 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ef5440b-a4c3-4e04-8e02-1055391021c7-utilities\") pod \"community-operators-n45jf\" (UID: \"1ef5440b-a4c3-4e04-8e02-1055391021c7\") " pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:41 crc kubenswrapper[4881]: I0121 11:43:41.927024 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ef5440b-a4c3-4e04-8e02-1055391021c7-utilities\") pod \"community-operators-n45jf\" (UID: \"1ef5440b-a4c3-4e04-8e02-1055391021c7\") " pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:41 crc kubenswrapper[4881]: I0121 11:43:41.927316 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ef5440b-a4c3-4e04-8e02-1055391021c7-catalog-content\") pod \"community-operators-n45jf\" (UID: \"1ef5440b-a4c3-4e04-8e02-1055391021c7\") " pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:41 crc kubenswrapper[4881]: I0121 11:43:41.948687 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qk4qx\" (UniqueName: \"kubernetes.io/projected/1ef5440b-a4c3-4e04-8e02-1055391021c7-kube-api-access-qk4qx\") pod \"community-operators-n45jf\" (UID: \"1ef5440b-a4c3-4e04-8e02-1055391021c7\") " pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:42 crc kubenswrapper[4881]: I0121 11:43:42.144604 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:42 crc kubenswrapper[4881]: I0121 11:43:42.854468 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n45jf"] Jan 21 11:43:42 crc kubenswrapper[4881]: W0121 11:43:42.872223 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ef5440b_a4c3_4e04_8e02_1055391021c7.slice/crio-3a13beca094a99c47c09db4ac9ab1071bf5ac21528dbfec02027e2662cc93ceb WatchSource:0}: Error finding container 3a13beca094a99c47c09db4ac9ab1071bf5ac21528dbfec02027e2662cc93ceb: Status 404 returned error can't find the container with id 3a13beca094a99c47c09db4ac9ab1071bf5ac21528dbfec02027e2662cc93ceb Jan 21 11:43:43 crc kubenswrapper[4881]: I0121 11:43:43.376444 4881 generic.go:334] "Generic (PLEG): container finished" podID="1ef5440b-a4c3-4e04-8e02-1055391021c7" containerID="c26997966243c289ddfac5194cdd80572f08ac7b80867bb4857e250a6d12a187" exitCode=0 Jan 21 11:43:43 crc kubenswrapper[4881]: I0121 11:43:43.376561 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n45jf" event={"ID":"1ef5440b-a4c3-4e04-8e02-1055391021c7","Type":"ContainerDied","Data":"c26997966243c289ddfac5194cdd80572f08ac7b80867bb4857e250a6d12a187"} Jan 21 11:43:43 crc kubenswrapper[4881]: I0121 11:43:43.376748 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n45jf" event={"ID":"1ef5440b-a4c3-4e04-8e02-1055391021c7","Type":"ContainerStarted","Data":"3a13beca094a99c47c09db4ac9ab1071bf5ac21528dbfec02027e2662cc93ceb"} Jan 21 11:43:43 crc kubenswrapper[4881]: I0121 11:43:43.381218 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 11:43:44 crc kubenswrapper[4881]: I0121 11:43:44.390899 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n45jf" event={"ID":"1ef5440b-a4c3-4e04-8e02-1055391021c7","Type":"ContainerStarted","Data":"74a922df0e7b54b9e776f1526c41912451611cbc054076f1e93cb3380e97cb21"} Jan 21 11:43:46 crc kubenswrapper[4881]: I0121 11:43:46.330305 4881 generic.go:334] "Generic (PLEG): container finished" podID="1ef5440b-a4c3-4e04-8e02-1055391021c7" containerID="74a922df0e7b54b9e776f1526c41912451611cbc054076f1e93cb3380e97cb21" exitCode=0 Jan 21 11:43:46 crc kubenswrapper[4881]: I0121 11:43:46.335802 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n45jf" event={"ID":"1ef5440b-a4c3-4e04-8e02-1055391021c7","Type":"ContainerDied","Data":"74a922df0e7b54b9e776f1526c41912451611cbc054076f1e93cb3380e97cb21"} Jan 21 11:43:47 crc kubenswrapper[4881]: I0121 11:43:47.343296 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n45jf" event={"ID":"1ef5440b-a4c3-4e04-8e02-1055391021c7","Type":"ContainerStarted","Data":"a13ad4a41ae4ee64b135dabd15e29e05d4c5703bdb161dfc59694765aa20ae2c"} Jan 21 11:43:47 crc kubenswrapper[4881]: I0121 
11:43:47.411153 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n45jf" podStartSLOduration=3.035493656 podStartE2EDuration="6.4111189s" podCreationTimestamp="2026-01-21 11:43:41 +0000 UTC" firstStartedPulling="2026-01-21 11:43:43.380963802 +0000 UTC m=+2810.640920271" lastFinishedPulling="2026-01-21 11:43:46.756589046 +0000 UTC m=+2814.016545515" observedRunningTime="2026-01-21 11:43:47.396075912 +0000 UTC m=+2814.656032401" watchObservedRunningTime="2026-01-21 11:43:47.4111189 +0000 UTC m=+2814.671075389" Jan 21 11:43:52 crc kubenswrapper[4881]: I0121 11:43:52.145543 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:52 crc kubenswrapper[4881]: I0121 11:43:52.146637 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:52 crc kubenswrapper[4881]: I0121 11:43:52.227921 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:52 crc kubenswrapper[4881]: I0121 11:43:52.697818 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:52 crc kubenswrapper[4881]: I0121 11:43:52.748567 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n45jf"] Jan 21 11:43:54 crc kubenswrapper[4881]: I0121 11:43:54.668418 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n45jf" podUID="1ef5440b-a4c3-4e04-8e02-1055391021c7" containerName="registry-server" containerID="cri-o://a13ad4a41ae4ee64b135dabd15e29e05d4c5703bdb161dfc59694765aa20ae2c" gracePeriod=2 Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.221027 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.281242 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk4qx\" (UniqueName: \"kubernetes.io/projected/1ef5440b-a4c3-4e04-8e02-1055391021c7-kube-api-access-qk4qx\") pod \"1ef5440b-a4c3-4e04-8e02-1055391021c7\" (UID: \"1ef5440b-a4c3-4e04-8e02-1055391021c7\") " Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.281337 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ef5440b-a4c3-4e04-8e02-1055391021c7-catalog-content\") pod \"1ef5440b-a4c3-4e04-8e02-1055391021c7\" (UID: \"1ef5440b-a4c3-4e04-8e02-1055391021c7\") " Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.281523 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ef5440b-a4c3-4e04-8e02-1055391021c7-utilities\") pod \"1ef5440b-a4c3-4e04-8e02-1055391021c7\" (UID: \"1ef5440b-a4c3-4e04-8e02-1055391021c7\") " Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.283080 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ef5440b-a4c3-4e04-8e02-1055391021c7-utilities" (OuterVolumeSpecName: "utilities") pod "1ef5440b-a4c3-4e04-8e02-1055391021c7" (UID: "1ef5440b-a4c3-4e04-8e02-1055391021c7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.287492 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ef5440b-a4c3-4e04-8e02-1055391021c7-kube-api-access-qk4qx" (OuterVolumeSpecName: "kube-api-access-qk4qx") pod "1ef5440b-a4c3-4e04-8e02-1055391021c7" (UID: "1ef5440b-a4c3-4e04-8e02-1055391021c7"). InnerVolumeSpecName "kube-api-access-qk4qx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.377836 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ef5440b-a4c3-4e04-8e02-1055391021c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ef5440b-a4c3-4e04-8e02-1055391021c7" (UID: "1ef5440b-a4c3-4e04-8e02-1055391021c7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.385286 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk4qx\" (UniqueName: \"kubernetes.io/projected/1ef5440b-a4c3-4e04-8e02-1055391021c7-kube-api-access-qk4qx\") on node \"crc\" DevicePath \"\"" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.385316 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ef5440b-a4c3-4e04-8e02-1055391021c7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.385328 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ef5440b-a4c3-4e04-8e02-1055391021c7-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.682985 4881 generic.go:334] "Generic (PLEG): container finished" podID="1ef5440b-a4c3-4e04-8e02-1055391021c7" containerID="a13ad4a41ae4ee64b135dabd15e29e05d4c5703bdb161dfc59694765aa20ae2c" exitCode=0 Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.683085 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.684405 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n45jf" event={"ID":"1ef5440b-a4c3-4e04-8e02-1055391021c7","Type":"ContainerDied","Data":"a13ad4a41ae4ee64b135dabd15e29e05d4c5703bdb161dfc59694765aa20ae2c"} Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.684609 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n45jf" event={"ID":"1ef5440b-a4c3-4e04-8e02-1055391021c7","Type":"ContainerDied","Data":"3a13beca094a99c47c09db4ac9ab1071bf5ac21528dbfec02027e2662cc93ceb"} Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.684628 4881 scope.go:117] "RemoveContainer" containerID="a13ad4a41ae4ee64b135dabd15e29e05d4c5703bdb161dfc59694765aa20ae2c" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.709627 4881 scope.go:117] "RemoveContainer" containerID="74a922df0e7b54b9e776f1526c41912451611cbc054076f1e93cb3380e97cb21" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.727805 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n45jf"] Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.740309 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n45jf"] Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.753493 4881 scope.go:117] "RemoveContainer" containerID="c26997966243c289ddfac5194cdd80572f08ac7b80867bb4857e250a6d12a187" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.797564 4881 scope.go:117] "RemoveContainer" containerID="a13ad4a41ae4ee64b135dabd15e29e05d4c5703bdb161dfc59694765aa20ae2c" Jan 21 11:43:55 crc kubenswrapper[4881]: E0121 11:43:55.797917 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a13ad4a41ae4ee64b135dabd15e29e05d4c5703bdb161dfc59694765aa20ae2c\": container with ID starting with a13ad4a41ae4ee64b135dabd15e29e05d4c5703bdb161dfc59694765aa20ae2c not found: ID does not exist" containerID="a13ad4a41ae4ee64b135dabd15e29e05d4c5703bdb161dfc59694765aa20ae2c" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.797958 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a13ad4a41ae4ee64b135dabd15e29e05d4c5703bdb161dfc59694765aa20ae2c"} err="failed to get container status \"a13ad4a41ae4ee64b135dabd15e29e05d4c5703bdb161dfc59694765aa20ae2c\": rpc error: code = NotFound desc = could not find container \"a13ad4a41ae4ee64b135dabd15e29e05d4c5703bdb161dfc59694765aa20ae2c\": container with ID starting with a13ad4a41ae4ee64b135dabd15e29e05d4c5703bdb161dfc59694765aa20ae2c not found: ID does not exist" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.797987 4881 scope.go:117] "RemoveContainer" containerID="74a922df0e7b54b9e776f1526c41912451611cbc054076f1e93cb3380e97cb21" Jan 21 11:43:55 crc kubenswrapper[4881]: E0121 11:43:55.798240 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74a922df0e7b54b9e776f1526c41912451611cbc054076f1e93cb3380e97cb21\": container with ID starting with 74a922df0e7b54b9e776f1526c41912451611cbc054076f1e93cb3380e97cb21 not found: ID does not exist" containerID="74a922df0e7b54b9e776f1526c41912451611cbc054076f1e93cb3380e97cb21" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.798264 4881 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74a922df0e7b54b9e776f1526c41912451611cbc054076f1e93cb3380e97cb21"} err="failed to get container status \"74a922df0e7b54b9e776f1526c41912451611cbc054076f1e93cb3380e97cb21\": rpc error: code = NotFound desc = could not find container \"74a922df0e7b54b9e776f1526c41912451611cbc054076f1e93cb3380e97cb21\": container with ID starting with 74a922df0e7b54b9e776f1526c41912451611cbc054076f1e93cb3380e97cb21 not found: ID does not exist" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.798277 4881 scope.go:117] "RemoveContainer" containerID="c26997966243c289ddfac5194cdd80572f08ac7b80867bb4857e250a6d12a187" Jan 21 11:43:55 crc kubenswrapper[4881]: E0121 11:43:55.798513 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c26997966243c289ddfac5194cdd80572f08ac7b80867bb4857e250a6d12a187\": container with ID starting with c26997966243c289ddfac5194cdd80572f08ac7b80867bb4857e250a6d12a187 not found: ID does not exist" containerID="c26997966243c289ddfac5194cdd80572f08ac7b80867bb4857e250a6d12a187" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.798542 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c26997966243c289ddfac5194cdd80572f08ac7b80867bb4857e250a6d12a187"} err="failed to get container status \"c26997966243c289ddfac5194cdd80572f08ac7b80867bb4857e250a6d12a187\": rpc error: code = NotFound desc = could not find container \"c26997966243c289ddfac5194cdd80572f08ac7b80867bb4857e250a6d12a187\": container with ID starting with c26997966243c289ddfac5194cdd80572f08ac7b80867bb4857e250a6d12a187 not found: ID does not exist" Jan 21 11:43:57 crc kubenswrapper[4881]: I0121 11:43:57.326810 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ef5440b-a4c3-4e04-8e02-1055391021c7" path="/var/lib/kubelet/pods/1ef5440b-a4c3-4e04-8e02-1055391021c7/volumes" Jan 21 11:43:59 crc kubenswrapper[4881]: I0121 11:43:59.852274 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:43:59 crc kubenswrapper[4881]: I0121 11:43:59.852650 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.672214 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fdrvq"] Jan 21 11:44:25 crc kubenswrapper[4881]: E0121 11:44:25.673300 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ef5440b-a4c3-4e04-8e02-1055391021c7" containerName="extract-content" Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.673314 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ef5440b-a4c3-4e04-8e02-1055391021c7" containerName="extract-content" Jan 21 11:44:25 crc kubenswrapper[4881]: E0121 11:44:25.673323 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ef5440b-a4c3-4e04-8e02-1055391021c7" containerName="extract-utilities" Jan 21 11:44:25 crc 
kubenswrapper[4881]: I0121 11:44:25.673330 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ef5440b-a4c3-4e04-8e02-1055391021c7" containerName="extract-utilities" Jan 21 11:44:25 crc kubenswrapper[4881]: E0121 11:44:25.673351 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ef5440b-a4c3-4e04-8e02-1055391021c7" containerName="registry-server" Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.673357 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ef5440b-a4c3-4e04-8e02-1055391021c7" containerName="registry-server" Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.673563 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ef5440b-a4c3-4e04-8e02-1055391021c7" containerName="registry-server" Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.675194 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fdrvq" Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.689943 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fdrvq"] Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.755313 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-catalog-content\") pod \"certified-operators-fdrvq\" (UID: \"2d42aa8e-f444-4984-a8d7-7a207bf7c53f\") " pod="openshift-marketplace/certified-operators-fdrvq" Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.755582 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdtt4\" (UniqueName: \"kubernetes.io/projected/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-kube-api-access-qdtt4\") pod \"certified-operators-fdrvq\" (UID: \"2d42aa8e-f444-4984-a8d7-7a207bf7c53f\") " pod="openshift-marketplace/certified-operators-fdrvq" Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.755682 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-utilities\") pod \"certified-operators-fdrvq\" (UID: \"2d42aa8e-f444-4984-a8d7-7a207bf7c53f\") " pod="openshift-marketplace/certified-operators-fdrvq" Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.857655 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-utilities\") pod \"certified-operators-fdrvq\" (UID: \"2d42aa8e-f444-4984-a8d7-7a207bf7c53f\") " pod="openshift-marketplace/certified-operators-fdrvq" Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.857741 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-catalog-content\") pod \"certified-operators-fdrvq\" (UID: \"2d42aa8e-f444-4984-a8d7-7a207bf7c53f\") " pod="openshift-marketplace/certified-operators-fdrvq" Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.857878 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdtt4\" (UniqueName: \"kubernetes.io/projected/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-kube-api-access-qdtt4\") pod \"certified-operators-fdrvq\" (UID: \"2d42aa8e-f444-4984-a8d7-7a207bf7c53f\") " 
pod="openshift-marketplace/certified-operators-fdrvq" Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.858120 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-utilities\") pod \"certified-operators-fdrvq\" (UID: \"2d42aa8e-f444-4984-a8d7-7a207bf7c53f\") " pod="openshift-marketplace/certified-operators-fdrvq" Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.858237 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-catalog-content\") pod \"certified-operators-fdrvq\" (UID: \"2d42aa8e-f444-4984-a8d7-7a207bf7c53f\") " pod="openshift-marketplace/certified-operators-fdrvq" Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.876664 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdtt4\" (UniqueName: \"kubernetes.io/projected/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-kube-api-access-qdtt4\") pod \"certified-operators-fdrvq\" (UID: \"2d42aa8e-f444-4984-a8d7-7a207bf7c53f\") " pod="openshift-marketplace/certified-operators-fdrvq" Jan 21 11:44:26 crc kubenswrapper[4881]: I0121 11:44:26.048582 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fdrvq" Jan 21 11:44:26 crc kubenswrapper[4881]: I0121 11:44:26.308054 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wc4t2"] Jan 21 11:44:26 crc kubenswrapper[4881]: I0121 11:44:26.310853 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wc4t2" Jan 21 11:44:26 crc kubenswrapper[4881]: I0121 11:44:26.342154 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wc4t2"] Jan 21 11:44:26 crc kubenswrapper[4881]: I0121 11:44:26.374128 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31857b1b-0b5b-40a8-8706-9002ca7c878b-utilities\") pod \"redhat-marketplace-wc4t2\" (UID: \"31857b1b-0b5b-40a8-8706-9002ca7c878b\") " pod="openshift-marketplace/redhat-marketplace-wc4t2" Jan 21 11:44:26 crc kubenswrapper[4881]: I0121 11:44:26.374341 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv9xw\" (UniqueName: \"kubernetes.io/projected/31857b1b-0b5b-40a8-8706-9002ca7c878b-kube-api-access-wv9xw\") pod \"redhat-marketplace-wc4t2\" (UID: \"31857b1b-0b5b-40a8-8706-9002ca7c878b\") " pod="openshift-marketplace/redhat-marketplace-wc4t2" Jan 21 11:44:26 crc kubenswrapper[4881]: I0121 11:44:26.375566 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31857b1b-0b5b-40a8-8706-9002ca7c878b-catalog-content\") pod \"redhat-marketplace-wc4t2\" (UID: \"31857b1b-0b5b-40a8-8706-9002ca7c878b\") " pod="openshift-marketplace/redhat-marketplace-wc4t2" Jan 21 11:44:26 crc kubenswrapper[4881]: I0121 11:44:26.477333 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31857b1b-0b5b-40a8-8706-9002ca7c878b-utilities\") pod \"redhat-marketplace-wc4t2\" (UID: \"31857b1b-0b5b-40a8-8706-9002ca7c878b\") " 
pod="openshift-marketplace/redhat-marketplace-wc4t2" Jan 21 11:44:26 crc kubenswrapper[4881]: I0121 11:44:26.477480 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv9xw\" (UniqueName: \"kubernetes.io/projected/31857b1b-0b5b-40a8-8706-9002ca7c878b-kube-api-access-wv9xw\") pod \"redhat-marketplace-wc4t2\" (UID: \"31857b1b-0b5b-40a8-8706-9002ca7c878b\") " pod="openshift-marketplace/redhat-marketplace-wc4t2" Jan 21 11:44:26 crc kubenswrapper[4881]: I0121 11:44:26.477615 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31857b1b-0b5b-40a8-8706-9002ca7c878b-catalog-content\") pod \"redhat-marketplace-wc4t2\" (UID: \"31857b1b-0b5b-40a8-8706-9002ca7c878b\") " pod="openshift-marketplace/redhat-marketplace-wc4t2" Jan 21 11:44:26 crc kubenswrapper[4881]: I0121 11:44:26.477953 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31857b1b-0b5b-40a8-8706-9002ca7c878b-utilities\") pod \"redhat-marketplace-wc4t2\" (UID: \"31857b1b-0b5b-40a8-8706-9002ca7c878b\") " pod="openshift-marketplace/redhat-marketplace-wc4t2" Jan 21 11:44:26 crc kubenswrapper[4881]: I0121 11:44:26.478245 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31857b1b-0b5b-40a8-8706-9002ca7c878b-catalog-content\") pod \"redhat-marketplace-wc4t2\" (UID: \"31857b1b-0b5b-40a8-8706-9002ca7c878b\") " pod="openshift-marketplace/redhat-marketplace-wc4t2" Jan 21 11:44:26 crc kubenswrapper[4881]: I0121 11:44:26.512410 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv9xw\" (UniqueName: \"kubernetes.io/projected/31857b1b-0b5b-40a8-8706-9002ca7c878b-kube-api-access-wv9xw\") pod \"redhat-marketplace-wc4t2\" (UID: \"31857b1b-0b5b-40a8-8706-9002ca7c878b\") " pod="openshift-marketplace/redhat-marketplace-wc4t2" Jan 21 11:44:26 crc kubenswrapper[4881]: I0121 11:44:26.637939 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fdrvq"] Jan 21 11:44:26 crc kubenswrapper[4881]: I0121 11:44:26.657459 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wc4t2" Jan 21 11:44:27 crc kubenswrapper[4881]: I0121 11:44:27.179844 4881 generic.go:334] "Generic (PLEG): container finished" podID="2d42aa8e-f444-4984-a8d7-7a207bf7c53f" containerID="be36f6ad834ca00233eadc7451dfda0c9752d18ed8499ac6ad57c9815db2567a" exitCode=0 Jan 21 11:44:27 crc kubenswrapper[4881]: I0121 11:44:27.180112 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdrvq" event={"ID":"2d42aa8e-f444-4984-a8d7-7a207bf7c53f","Type":"ContainerDied","Data":"be36f6ad834ca00233eadc7451dfda0c9752d18ed8499ac6ad57c9815db2567a"} Jan 21 11:44:27 crc kubenswrapper[4881]: I0121 11:44:27.180137 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdrvq" event={"ID":"2d42aa8e-f444-4984-a8d7-7a207bf7c53f","Type":"ContainerStarted","Data":"d69f26819e41f24884704883945e98a254587e278d32d1d4a11013c821014e32"} Jan 21 11:44:27 crc kubenswrapper[4881]: I0121 11:44:27.257601 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wc4t2"] Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.399304 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-txhzl"] Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.401919 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.405845 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdrvq" event={"ID":"2d42aa8e-f444-4984-a8d7-7a207bf7c53f","Type":"ContainerStarted","Data":"a3b87112cc4e2f5703453d1593b9d75e4be1102fb918a336d940180bb24d7b53"} Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.408136 4881 generic.go:334] "Generic (PLEG): container finished" podID="31857b1b-0b5b-40a8-8706-9002ca7c878b" containerID="336832bcd33835056ce008b8a53f0d9baf232ba9edeed16ee2274ae9fd33d3e1" exitCode=0 Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.408972 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc4t2" event={"ID":"31857b1b-0b5b-40a8-8706-9002ca7c878b","Type":"ContainerDied","Data":"336832bcd33835056ce008b8a53f0d9baf232ba9edeed16ee2274ae9fd33d3e1"} Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.409015 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc4t2" event={"ID":"31857b1b-0b5b-40a8-8706-9002ca7c878b","Type":"ContainerStarted","Data":"faddb564d26733e0b7d65cf614390493c38cce9c895f446c1284fb2526e50080"} Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.413613 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-txhzl"] Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.493307 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fknht\" (UniqueName: \"kubernetes.io/projected/a0e7b801-0b42-4a0f-9d8a-6098f067d197-kube-api-access-fknht\") pod \"redhat-operators-txhzl\" (UID: \"a0e7b801-0b42-4a0f-9d8a-6098f067d197\") " pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.493609 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a0e7b801-0b42-4a0f-9d8a-6098f067d197-catalog-content\") pod \"redhat-operators-txhzl\" (UID: \"a0e7b801-0b42-4a0f-9d8a-6098f067d197\") " pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.493687 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0e7b801-0b42-4a0f-9d8a-6098f067d197-utilities\") pod \"redhat-operators-txhzl\" (UID: \"a0e7b801-0b42-4a0f-9d8a-6098f067d197\") " pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.598149 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0e7b801-0b42-4a0f-9d8a-6098f067d197-catalog-content\") pod \"redhat-operators-txhzl\" (UID: \"a0e7b801-0b42-4a0f-9d8a-6098f067d197\") " pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.598339 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0e7b801-0b42-4a0f-9d8a-6098f067d197-utilities\") pod \"redhat-operators-txhzl\" (UID: \"a0e7b801-0b42-4a0f-9d8a-6098f067d197\") " pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.598611 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fknht\" (UniqueName: \"kubernetes.io/projected/a0e7b801-0b42-4a0f-9d8a-6098f067d197-kube-api-access-fknht\") pod \"redhat-operators-txhzl\" (UID: \"a0e7b801-0b42-4a0f-9d8a-6098f067d197\") " pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.599674 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0e7b801-0b42-4a0f-9d8a-6098f067d197-utilities\") pod \"redhat-operators-txhzl\" (UID: \"a0e7b801-0b42-4a0f-9d8a-6098f067d197\") " pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.599727 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0e7b801-0b42-4a0f-9d8a-6098f067d197-catalog-content\") pod \"redhat-operators-txhzl\" (UID: \"a0e7b801-0b42-4a0f-9d8a-6098f067d197\") " pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.618589 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fknht\" (UniqueName: \"kubernetes.io/projected/a0e7b801-0b42-4a0f-9d8a-6098f067d197-kube-api-access-fknht\") pod \"redhat-operators-txhzl\" (UID: \"a0e7b801-0b42-4a0f-9d8a-6098f067d197\") " pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.936468 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:29 crc kubenswrapper[4881]: I0121 11:44:29.425471 4881 generic.go:334] "Generic (PLEG): container finished" podID="2d42aa8e-f444-4984-a8d7-7a207bf7c53f" containerID="a3b87112cc4e2f5703453d1593b9d75e4be1102fb918a336d940180bb24d7b53" exitCode=0 Jan 21 11:44:29 crc kubenswrapper[4881]: I0121 11:44:29.425549 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdrvq" event={"ID":"2d42aa8e-f444-4984-a8d7-7a207bf7c53f","Type":"ContainerDied","Data":"a3b87112cc4e2f5703453d1593b9d75e4be1102fb918a336d940180bb24d7b53"} Jan 21 11:44:29 crc kubenswrapper[4881]: I0121 11:44:29.488718 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-txhzl"] Jan 21 11:44:29 crc kubenswrapper[4881]: I0121 11:44:29.851428 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:44:29 crc kubenswrapper[4881]: I0121 11:44:29.851835 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:44:30 crc kubenswrapper[4881]: I0121 11:44:30.439440 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdrvq" event={"ID":"2d42aa8e-f444-4984-a8d7-7a207bf7c53f","Type":"ContainerStarted","Data":"c46a2a4d819c8a32cc07d84e8693331645ce9fdf0d2715fdb9ac2374aedc71ff"} Jan 21 11:44:30 crc kubenswrapper[4881]: I0121 11:44:30.442069 4881 generic.go:334] "Generic (PLEG): container finished" podID="31857b1b-0b5b-40a8-8706-9002ca7c878b" containerID="c6c4d44fb090872e6b0605107087391221f656d81db179f5a1c6b09418925f51" exitCode=0 Jan 21 11:44:30 crc kubenswrapper[4881]: I0121 11:44:30.442163 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc4t2" event={"ID":"31857b1b-0b5b-40a8-8706-9002ca7c878b","Type":"ContainerDied","Data":"c6c4d44fb090872e6b0605107087391221f656d81db179f5a1c6b09418925f51"} Jan 21 11:44:30 crc kubenswrapper[4881]: I0121 11:44:30.443598 4881 generic.go:334] "Generic (PLEG): container finished" podID="a0e7b801-0b42-4a0f-9d8a-6098f067d197" containerID="cdbde46c3239eb1d76f2261767f55cedbe3ed1340d6b7f6c84f70007b06ecd03" exitCode=0 Jan 21 11:44:30 crc kubenswrapper[4881]: I0121 11:44:30.443629 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txhzl" event={"ID":"a0e7b801-0b42-4a0f-9d8a-6098f067d197","Type":"ContainerDied","Data":"cdbde46c3239eb1d76f2261767f55cedbe3ed1340d6b7f6c84f70007b06ecd03"} Jan 21 11:44:30 crc kubenswrapper[4881]: I0121 11:44:30.443655 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txhzl" event={"ID":"a0e7b801-0b42-4a0f-9d8a-6098f067d197","Type":"ContainerStarted","Data":"d0d563f9da5b6ef6d9aeb469bdb9a55a96af9c6b6f7a766f8209eb73233aaf4a"} Jan 21 11:44:30 crc kubenswrapper[4881]: I0121 11:44:30.485129 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fdrvq" 
podStartSLOduration=2.806435882 podStartE2EDuration="5.485101261s" podCreationTimestamp="2026-01-21 11:44:25 +0000 UTC" firstStartedPulling="2026-01-21 11:44:27.181860976 +0000 UTC m=+2854.441817445" lastFinishedPulling="2026-01-21 11:44:29.860526345 +0000 UTC m=+2857.120482824" observedRunningTime="2026-01-21 11:44:30.478866967 +0000 UTC m=+2857.738823436" watchObservedRunningTime="2026-01-21 11:44:30.485101261 +0000 UTC m=+2857.745057730" Jan 21 11:44:31 crc kubenswrapper[4881]: I0121 11:44:31.681702 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc4t2" event={"ID":"31857b1b-0b5b-40a8-8706-9002ca7c878b","Type":"ContainerStarted","Data":"d82428e58c47c9e949608c486e52ec560ea14cbe4c81a28efff4fb76bfd1d58d"} Jan 21 11:44:31 crc kubenswrapper[4881]: I0121 11:44:31.701087 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txhzl" event={"ID":"a0e7b801-0b42-4a0f-9d8a-6098f067d197","Type":"ContainerStarted","Data":"da4f8fc6bf0374e81ec0a1951df788719674fcd294eade9800200cfc352dfb8d"} Jan 21 11:44:31 crc kubenswrapper[4881]: I0121 11:44:31.709482 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wc4t2" podStartSLOduration=3.277819687 podStartE2EDuration="5.709463844s" podCreationTimestamp="2026-01-21 11:44:26 +0000 UTC" firstStartedPulling="2026-01-21 11:44:28.413036297 +0000 UTC m=+2855.672992766" lastFinishedPulling="2026-01-21 11:44:30.844680454 +0000 UTC m=+2858.104636923" observedRunningTime="2026-01-21 11:44:31.709322131 +0000 UTC m=+2858.969278610" watchObservedRunningTime="2026-01-21 11:44:31.709463844 +0000 UTC m=+2858.969420313" Jan 21 11:44:36 crc kubenswrapper[4881]: I0121 11:44:36.049632 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fdrvq" Jan 21 11:44:36 crc kubenswrapper[4881]: I0121 11:44:36.050259 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fdrvq" Jan 21 11:44:36 crc kubenswrapper[4881]: I0121 11:44:36.113057 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fdrvq" Jan 21 11:44:36 crc kubenswrapper[4881]: I0121 11:44:36.658523 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wc4t2" Jan 21 11:44:36 crc kubenswrapper[4881]: I0121 11:44:36.658622 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wc4t2" Jan 21 11:44:36 crc kubenswrapper[4881]: I0121 11:44:36.737760 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wc4t2" Jan 21 11:44:36 crc kubenswrapper[4881]: I0121 11:44:36.778629 4881 generic.go:334] "Generic (PLEG): container finished" podID="a0e7b801-0b42-4a0f-9d8a-6098f067d197" containerID="da4f8fc6bf0374e81ec0a1951df788719674fcd294eade9800200cfc352dfb8d" exitCode=0 Jan 21 11:44:36 crc kubenswrapper[4881]: I0121 11:44:36.778726 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txhzl" event={"ID":"a0e7b801-0b42-4a0f-9d8a-6098f067d197","Type":"ContainerDied","Data":"da4f8fc6bf0374e81ec0a1951df788719674fcd294eade9800200cfc352dfb8d"} Jan 21 11:44:36 crc kubenswrapper[4881]: I0121 11:44:36.858560 4881 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wc4t2" Jan 21 11:44:36 crc kubenswrapper[4881]: I0121 11:44:36.859387 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fdrvq" Jan 21 11:44:37 crc kubenswrapper[4881]: I0121 11:44:37.791989 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txhzl" event={"ID":"a0e7b801-0b42-4a0f-9d8a-6098f067d197","Type":"ContainerStarted","Data":"e8f0bb41b337b4197d37181f051ccd1921807a454bc397a6fe4fe06dfc3f10b7"} Jan 21 11:44:37 crc kubenswrapper[4881]: I0121 11:44:37.825392 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-txhzl" podStartSLOduration=3.068594896 podStartE2EDuration="9.825366576s" podCreationTimestamp="2026-01-21 11:44:28 +0000 UTC" firstStartedPulling="2026-01-21 11:44:30.445888276 +0000 UTC m=+2857.705844735" lastFinishedPulling="2026-01-21 11:44:37.202659946 +0000 UTC m=+2864.462616415" observedRunningTime="2026-01-21 11:44:37.813250949 +0000 UTC m=+2865.073207438" watchObservedRunningTime="2026-01-21 11:44:37.825366576 +0000 UTC m=+2865.085323055" Jan 21 11:44:38 crc kubenswrapper[4881]: I0121 11:44:38.061397 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wc4t2"] Jan 21 11:44:38 crc kubenswrapper[4881]: I0121 11:44:38.804207 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wc4t2" podUID="31857b1b-0b5b-40a8-8706-9002ca7c878b" containerName="registry-server" containerID="cri-o://d82428e58c47c9e949608c486e52ec560ea14cbe4c81a28efff4fb76bfd1d58d" gracePeriod=2 Jan 21 11:44:38 crc kubenswrapper[4881]: I0121 11:44:38.938522 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:38 crc kubenswrapper[4881]: I0121 11:44:38.938891 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.308667 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wc4t2" Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.486276 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wv9xw\" (UniqueName: \"kubernetes.io/projected/31857b1b-0b5b-40a8-8706-9002ca7c878b-kube-api-access-wv9xw\") pod \"31857b1b-0b5b-40a8-8706-9002ca7c878b\" (UID: \"31857b1b-0b5b-40a8-8706-9002ca7c878b\") " Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.486683 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31857b1b-0b5b-40a8-8706-9002ca7c878b-utilities\") pod \"31857b1b-0b5b-40a8-8706-9002ca7c878b\" (UID: \"31857b1b-0b5b-40a8-8706-9002ca7c878b\") " Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.486914 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31857b1b-0b5b-40a8-8706-9002ca7c878b-catalog-content\") pod \"31857b1b-0b5b-40a8-8706-9002ca7c878b\" (UID: \"31857b1b-0b5b-40a8-8706-9002ca7c878b\") " Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.488309 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31857b1b-0b5b-40a8-8706-9002ca7c878b-utilities" (OuterVolumeSpecName: "utilities") pod "31857b1b-0b5b-40a8-8706-9002ca7c878b" (UID: "31857b1b-0b5b-40a8-8706-9002ca7c878b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.492480 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31857b1b-0b5b-40a8-8706-9002ca7c878b-kube-api-access-wv9xw" (OuterVolumeSpecName: "kube-api-access-wv9xw") pod "31857b1b-0b5b-40a8-8706-9002ca7c878b" (UID: "31857b1b-0b5b-40a8-8706-9002ca7c878b"). InnerVolumeSpecName "kube-api-access-wv9xw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.508456 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31857b1b-0b5b-40a8-8706-9002ca7c878b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31857b1b-0b5b-40a8-8706-9002ca7c878b" (UID: "31857b1b-0b5b-40a8-8706-9002ca7c878b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.590422 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wv9xw\" (UniqueName: \"kubernetes.io/projected/31857b1b-0b5b-40a8-8706-9002ca7c878b-kube-api-access-wv9xw\") on node \"crc\" DevicePath \"\"" Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.590515 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31857b1b-0b5b-40a8-8706-9002ca7c878b-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.590538 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31857b1b-0b5b-40a8-8706-9002ca7c878b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.819351 4881 generic.go:334] "Generic (PLEG): container finished" podID="31857b1b-0b5b-40a8-8706-9002ca7c878b" containerID="d82428e58c47c9e949608c486e52ec560ea14cbe4c81a28efff4fb76bfd1d58d" exitCode=0 Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.819419 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc4t2" event={"ID":"31857b1b-0b5b-40a8-8706-9002ca7c878b","Type":"ContainerDied","Data":"d82428e58c47c9e949608c486e52ec560ea14cbe4c81a28efff4fb76bfd1d58d"} Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.819492 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc4t2" event={"ID":"31857b1b-0b5b-40a8-8706-9002ca7c878b","Type":"ContainerDied","Data":"faddb564d26733e0b7d65cf614390493c38cce9c895f446c1284fb2526e50080"} Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.819490 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wc4t2" Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.819520 4881 scope.go:117] "RemoveContainer" containerID="d82428e58c47c9e949608c486e52ec560ea14cbe4c81a28efff4fb76bfd1d58d" Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.846078 4881 scope.go:117] "RemoveContainer" containerID="c6c4d44fb090872e6b0605107087391221f656d81db179f5a1c6b09418925f51" Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.869320 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wc4t2"] Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.879594 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wc4t2"] Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.894237 4881 scope.go:117] "RemoveContainer" containerID="336832bcd33835056ce008b8a53f0d9baf232ba9edeed16ee2274ae9fd33d3e1" Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.935925 4881 scope.go:117] "RemoveContainer" containerID="d82428e58c47c9e949608c486e52ec560ea14cbe4c81a28efff4fb76bfd1d58d" Jan 21 11:44:39 crc kubenswrapper[4881]: E0121 11:44:39.936522 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d82428e58c47c9e949608c486e52ec560ea14cbe4c81a28efff4fb76bfd1d58d\": container with ID starting with d82428e58c47c9e949608c486e52ec560ea14cbe4c81a28efff4fb76bfd1d58d not found: ID does not exist" containerID="d82428e58c47c9e949608c486e52ec560ea14cbe4c81a28efff4fb76bfd1d58d" Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.936593 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d82428e58c47c9e949608c486e52ec560ea14cbe4c81a28efff4fb76bfd1d58d"} err="failed to get container status \"d82428e58c47c9e949608c486e52ec560ea14cbe4c81a28efff4fb76bfd1d58d\": rpc error: code = NotFound desc = could not find container \"d82428e58c47c9e949608c486e52ec560ea14cbe4c81a28efff4fb76bfd1d58d\": container with ID starting with d82428e58c47c9e949608c486e52ec560ea14cbe4c81a28efff4fb76bfd1d58d not found: ID does not exist" Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.936636 4881 scope.go:117] "RemoveContainer" containerID="c6c4d44fb090872e6b0605107087391221f656d81db179f5a1c6b09418925f51" Jan 21 11:44:39 crc kubenswrapper[4881]: E0121 11:44:39.937450 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6c4d44fb090872e6b0605107087391221f656d81db179f5a1c6b09418925f51\": container with ID starting with c6c4d44fb090872e6b0605107087391221f656d81db179f5a1c6b09418925f51 not found: ID does not exist" containerID="c6c4d44fb090872e6b0605107087391221f656d81db179f5a1c6b09418925f51" Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.937505 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6c4d44fb090872e6b0605107087391221f656d81db179f5a1c6b09418925f51"} err="failed to get container status \"c6c4d44fb090872e6b0605107087391221f656d81db179f5a1c6b09418925f51\": rpc error: code = NotFound desc = could not find container \"c6c4d44fb090872e6b0605107087391221f656d81db179f5a1c6b09418925f51\": container with ID starting with c6c4d44fb090872e6b0605107087391221f656d81db179f5a1c6b09418925f51 not found: ID does not exist" Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.937546 4881 scope.go:117] "RemoveContainer" 
containerID="336832bcd33835056ce008b8a53f0d9baf232ba9edeed16ee2274ae9fd33d3e1" Jan 21 11:44:39 crc kubenswrapper[4881]: E0121 11:44:39.938006 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"336832bcd33835056ce008b8a53f0d9baf232ba9edeed16ee2274ae9fd33d3e1\": container with ID starting with 336832bcd33835056ce008b8a53f0d9baf232ba9edeed16ee2274ae9fd33d3e1 not found: ID does not exist" containerID="336832bcd33835056ce008b8a53f0d9baf232ba9edeed16ee2274ae9fd33d3e1" Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.938047 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"336832bcd33835056ce008b8a53f0d9baf232ba9edeed16ee2274ae9fd33d3e1"} err="failed to get container status \"336832bcd33835056ce008b8a53f0d9baf232ba9edeed16ee2274ae9fd33d3e1\": rpc error: code = NotFound desc = could not find container \"336832bcd33835056ce008b8a53f0d9baf232ba9edeed16ee2274ae9fd33d3e1\": container with ID starting with 336832bcd33835056ce008b8a53f0d9baf232ba9edeed16ee2274ae9fd33d3e1 not found: ID does not exist" Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.996355 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-txhzl" podUID="a0e7b801-0b42-4a0f-9d8a-6098f067d197" containerName="registry-server" probeResult="failure" output=< Jan 21 11:44:39 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 11:44:39 crc kubenswrapper[4881]: > Jan 21 11:44:40 crc kubenswrapper[4881]: I0121 11:44:40.462173 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fdrvq"] Jan 21 11:44:40 crc kubenswrapper[4881]: I0121 11:44:40.462693 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fdrvq" podUID="2d42aa8e-f444-4984-a8d7-7a207bf7c53f" containerName="registry-server" containerID="cri-o://c46a2a4d819c8a32cc07d84e8693331645ce9fdf0d2715fdb9ac2374aedc71ff" gracePeriod=2 Jan 21 11:44:40 crc kubenswrapper[4881]: I0121 11:44:40.832891 4881 generic.go:334] "Generic (PLEG): container finished" podID="2d42aa8e-f444-4984-a8d7-7a207bf7c53f" containerID="c46a2a4d819c8a32cc07d84e8693331645ce9fdf0d2715fdb9ac2374aedc71ff" exitCode=0 Jan 21 11:44:40 crc kubenswrapper[4881]: I0121 11:44:40.832961 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdrvq" event={"ID":"2d42aa8e-f444-4984-a8d7-7a207bf7c53f","Type":"ContainerDied","Data":"c46a2a4d819c8a32cc07d84e8693331645ce9fdf0d2715fdb9ac2374aedc71ff"} Jan 21 11:44:40 crc kubenswrapper[4881]: I0121 11:44:40.833003 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdrvq" event={"ID":"2d42aa8e-f444-4984-a8d7-7a207bf7c53f","Type":"ContainerDied","Data":"d69f26819e41f24884704883945e98a254587e278d32d1d4a11013c821014e32"} Jan 21 11:44:40 crc kubenswrapper[4881]: I0121 11:44:40.833017 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d69f26819e41f24884704883945e98a254587e278d32d1d4a11013c821014e32" Jan 21 11:44:40 crc kubenswrapper[4881]: I0121 11:44:40.908723 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fdrvq" Jan 21 11:44:41 crc kubenswrapper[4881]: I0121 11:44:41.019268 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-utilities\") pod \"2d42aa8e-f444-4984-a8d7-7a207bf7c53f\" (UID: \"2d42aa8e-f444-4984-a8d7-7a207bf7c53f\") " Jan 21 11:44:41 crc kubenswrapper[4881]: I0121 11:44:41.019820 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-catalog-content\") pod \"2d42aa8e-f444-4984-a8d7-7a207bf7c53f\" (UID: \"2d42aa8e-f444-4984-a8d7-7a207bf7c53f\") " Jan 21 11:44:41 crc kubenswrapper[4881]: I0121 11:44:41.019965 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-utilities" (OuterVolumeSpecName: "utilities") pod "2d42aa8e-f444-4984-a8d7-7a207bf7c53f" (UID: "2d42aa8e-f444-4984-a8d7-7a207bf7c53f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:44:41 crc kubenswrapper[4881]: I0121 11:44:41.020279 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdtt4\" (UniqueName: \"kubernetes.io/projected/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-kube-api-access-qdtt4\") pod \"2d42aa8e-f444-4984-a8d7-7a207bf7c53f\" (UID: \"2d42aa8e-f444-4984-a8d7-7a207bf7c53f\") " Jan 21 11:44:41 crc kubenswrapper[4881]: I0121 11:44:41.022494 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:44:41 crc kubenswrapper[4881]: I0121 11:44:41.027074 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-kube-api-access-qdtt4" (OuterVolumeSpecName: "kube-api-access-qdtt4") pod "2d42aa8e-f444-4984-a8d7-7a207bf7c53f" (UID: "2d42aa8e-f444-4984-a8d7-7a207bf7c53f"). InnerVolumeSpecName "kube-api-access-qdtt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:44:41 crc kubenswrapper[4881]: I0121 11:44:41.086814 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d42aa8e-f444-4984-a8d7-7a207bf7c53f" (UID: "2d42aa8e-f444-4984-a8d7-7a207bf7c53f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:44:41 crc kubenswrapper[4881]: I0121 11:44:41.124377 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:44:41 crc kubenswrapper[4881]: I0121 11:44:41.124434 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdtt4\" (UniqueName: \"kubernetes.io/projected/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-kube-api-access-qdtt4\") on node \"crc\" DevicePath \"\"" Jan 21 11:44:41 crc kubenswrapper[4881]: I0121 11:44:41.328265 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31857b1b-0b5b-40a8-8706-9002ca7c878b" path="/var/lib/kubelet/pods/31857b1b-0b5b-40a8-8706-9002ca7c878b/volumes" Jan 21 11:44:41 crc kubenswrapper[4881]: I0121 11:44:41.844376 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fdrvq" Jan 21 11:44:41 crc kubenswrapper[4881]: I0121 11:44:41.873698 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fdrvq"] Jan 21 11:44:41 crc kubenswrapper[4881]: I0121 11:44:41.883679 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fdrvq"] Jan 21 11:44:43 crc kubenswrapper[4881]: I0121 11:44:43.329644 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d42aa8e-f444-4984-a8d7-7a207bf7c53f" path="/var/lib/kubelet/pods/2d42aa8e-f444-4984-a8d7-7a207bf7c53f/volumes" Jan 21 11:44:49 crc kubenswrapper[4881]: I0121 11:44:49.003003 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:49 crc kubenswrapper[4881]: I0121 11:44:49.068445 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:49 crc kubenswrapper[4881]: I0121 11:44:49.244948 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-txhzl"] Jan 21 11:44:50 crc kubenswrapper[4881]: I0121 11:44:50.931066 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-txhzl" podUID="a0e7b801-0b42-4a0f-9d8a-6098f067d197" containerName="registry-server" containerID="cri-o://e8f0bb41b337b4197d37181f051ccd1921807a454bc397a6fe4fe06dfc3f10b7" gracePeriod=2 Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.422295 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.464991 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0e7b801-0b42-4a0f-9d8a-6098f067d197-catalog-content\") pod \"a0e7b801-0b42-4a0f-9d8a-6098f067d197\" (UID: \"a0e7b801-0b42-4a0f-9d8a-6098f067d197\") " Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.569569 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fknht\" (UniqueName: \"kubernetes.io/projected/a0e7b801-0b42-4a0f-9d8a-6098f067d197-kube-api-access-fknht\") pod \"a0e7b801-0b42-4a0f-9d8a-6098f067d197\" (UID: \"a0e7b801-0b42-4a0f-9d8a-6098f067d197\") " Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.569893 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0e7b801-0b42-4a0f-9d8a-6098f067d197-utilities\") pod \"a0e7b801-0b42-4a0f-9d8a-6098f067d197\" (UID: \"a0e7b801-0b42-4a0f-9d8a-6098f067d197\") " Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.570987 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0e7b801-0b42-4a0f-9d8a-6098f067d197-utilities" (OuterVolumeSpecName: "utilities") pod "a0e7b801-0b42-4a0f-9d8a-6098f067d197" (UID: "a0e7b801-0b42-4a0f-9d8a-6098f067d197"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.571973 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0e7b801-0b42-4a0f-9d8a-6098f067d197-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.576865 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0e7b801-0b42-4a0f-9d8a-6098f067d197-kube-api-access-fknht" (OuterVolumeSpecName: "kube-api-access-fknht") pod "a0e7b801-0b42-4a0f-9d8a-6098f067d197" (UID: "a0e7b801-0b42-4a0f-9d8a-6098f067d197"). InnerVolumeSpecName "kube-api-access-fknht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.610652 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0e7b801-0b42-4a0f-9d8a-6098f067d197-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0e7b801-0b42-4a0f-9d8a-6098f067d197" (UID: "a0e7b801-0b42-4a0f-9d8a-6098f067d197"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.673606 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fknht\" (UniqueName: \"kubernetes.io/projected/a0e7b801-0b42-4a0f-9d8a-6098f067d197-kube-api-access-fknht\") on node \"crc\" DevicePath \"\"" Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.673646 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0e7b801-0b42-4a0f-9d8a-6098f067d197-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.946596 4881 generic.go:334] "Generic (PLEG): container finished" podID="a0e7b801-0b42-4a0f-9d8a-6098f067d197" containerID="e8f0bb41b337b4197d37181f051ccd1921807a454bc397a6fe4fe06dfc3f10b7" exitCode=0 Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.946654 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txhzl" event={"ID":"a0e7b801-0b42-4a0f-9d8a-6098f067d197","Type":"ContainerDied","Data":"e8f0bb41b337b4197d37181f051ccd1921807a454bc397a6fe4fe06dfc3f10b7"} Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.946688 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txhzl" event={"ID":"a0e7b801-0b42-4a0f-9d8a-6098f067d197","Type":"ContainerDied","Data":"d0d563f9da5b6ef6d9aeb469bdb9a55a96af9c6b6f7a766f8209eb73233aaf4a"} Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.946709 4881 scope.go:117] "RemoveContainer" containerID="e8f0bb41b337b4197d37181f051ccd1921807a454bc397a6fe4fe06dfc3f10b7" Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.946900 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.980738 4881 scope.go:117] "RemoveContainer" containerID="da4f8fc6bf0374e81ec0a1951df788719674fcd294eade9800200cfc352dfb8d" Jan 21 11:44:52 crc kubenswrapper[4881]: I0121 11:44:52.000946 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-txhzl"] Jan 21 11:44:52 crc kubenswrapper[4881]: I0121 11:44:52.014417 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-txhzl"] Jan 21 11:44:52 crc kubenswrapper[4881]: I0121 11:44:52.024376 4881 scope.go:117] "RemoveContainer" containerID="cdbde46c3239eb1d76f2261767f55cedbe3ed1340d6b7f6c84f70007b06ecd03" Jan 21 11:44:52 crc kubenswrapper[4881]: I0121 11:44:52.092386 4881 scope.go:117] "RemoveContainer" containerID="e8f0bb41b337b4197d37181f051ccd1921807a454bc397a6fe4fe06dfc3f10b7" Jan 21 11:44:52 crc kubenswrapper[4881]: E0121 11:44:52.092998 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8f0bb41b337b4197d37181f051ccd1921807a454bc397a6fe4fe06dfc3f10b7\": container with ID starting with e8f0bb41b337b4197d37181f051ccd1921807a454bc397a6fe4fe06dfc3f10b7 not found: ID does not exist" containerID="e8f0bb41b337b4197d37181f051ccd1921807a454bc397a6fe4fe06dfc3f10b7" Jan 21 11:44:52 crc kubenswrapper[4881]: I0121 11:44:52.093057 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8f0bb41b337b4197d37181f051ccd1921807a454bc397a6fe4fe06dfc3f10b7"} err="failed to get container status \"e8f0bb41b337b4197d37181f051ccd1921807a454bc397a6fe4fe06dfc3f10b7\": rpc error: code = NotFound desc = could not find container \"e8f0bb41b337b4197d37181f051ccd1921807a454bc397a6fe4fe06dfc3f10b7\": container with ID starting with e8f0bb41b337b4197d37181f051ccd1921807a454bc397a6fe4fe06dfc3f10b7 not found: ID does not exist" Jan 21 11:44:52 crc kubenswrapper[4881]: I0121 11:44:52.093089 4881 scope.go:117] "RemoveContainer" containerID="da4f8fc6bf0374e81ec0a1951df788719674fcd294eade9800200cfc352dfb8d" Jan 21 11:44:52 crc kubenswrapper[4881]: E0121 11:44:52.093412 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da4f8fc6bf0374e81ec0a1951df788719674fcd294eade9800200cfc352dfb8d\": container with ID starting with da4f8fc6bf0374e81ec0a1951df788719674fcd294eade9800200cfc352dfb8d not found: ID does not exist" containerID="da4f8fc6bf0374e81ec0a1951df788719674fcd294eade9800200cfc352dfb8d" Jan 21 11:44:52 crc kubenswrapper[4881]: I0121 11:44:52.093443 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da4f8fc6bf0374e81ec0a1951df788719674fcd294eade9800200cfc352dfb8d"} err="failed to get container status \"da4f8fc6bf0374e81ec0a1951df788719674fcd294eade9800200cfc352dfb8d\": rpc error: code = NotFound desc = could not find container \"da4f8fc6bf0374e81ec0a1951df788719674fcd294eade9800200cfc352dfb8d\": container with ID starting with da4f8fc6bf0374e81ec0a1951df788719674fcd294eade9800200cfc352dfb8d not found: ID does not exist" Jan 21 11:44:52 crc kubenswrapper[4881]: I0121 11:44:52.093459 4881 scope.go:117] "RemoveContainer" containerID="cdbde46c3239eb1d76f2261767f55cedbe3ed1340d6b7f6c84f70007b06ecd03" Jan 21 11:44:52 crc kubenswrapper[4881]: E0121 11:44:52.093700 4881 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"cdbde46c3239eb1d76f2261767f55cedbe3ed1340d6b7f6c84f70007b06ecd03\": container with ID starting with cdbde46c3239eb1d76f2261767f55cedbe3ed1340d6b7f6c84f70007b06ecd03 not found: ID does not exist" containerID="cdbde46c3239eb1d76f2261767f55cedbe3ed1340d6b7f6c84f70007b06ecd03" Jan 21 11:44:52 crc kubenswrapper[4881]: I0121 11:44:52.093727 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdbde46c3239eb1d76f2261767f55cedbe3ed1340d6b7f6c84f70007b06ecd03"} err="failed to get container status \"cdbde46c3239eb1d76f2261767f55cedbe3ed1340d6b7f6c84f70007b06ecd03\": rpc error: code = NotFound desc = could not find container \"cdbde46c3239eb1d76f2261767f55cedbe3ed1340d6b7f6c84f70007b06ecd03\": container with ID starting with cdbde46c3239eb1d76f2261767f55cedbe3ed1340d6b7f6c84f70007b06ecd03 not found: ID does not exist" Jan 21 11:44:53 crc kubenswrapper[4881]: I0121 11:44:53.327117 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0e7b801-0b42-4a0f-9d8a-6098f067d197" path="/var/lib/kubelet/pods/a0e7b801-0b42-4a0f-9d8a-6098f067d197/volumes" Jan 21 11:44:59 crc kubenswrapper[4881]: I0121 11:44:59.850837 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:44:59 crc kubenswrapper[4881]: I0121 11:44:59.851467 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:44:59 crc kubenswrapper[4881]: I0121 11:44:59.851521 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 11:44:59 crc kubenswrapper[4881]: I0121 11:44:59.852689 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"40878d2da6716331f0a893f4c9f3938e30cde34eaf4eb8051eda58bfc84a6a6c"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:44:59 crc kubenswrapper[4881]: I0121 11:44:59.852755 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://40878d2da6716331f0a893f4c9f3938e30cde34eaf4eb8051eda58bfc84a6a6c" gracePeriod=600 Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.067878 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="40878d2da6716331f0a893f4c9f3938e30cde34eaf4eb8051eda58bfc84a6a6c" exitCode=0 Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.067930 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"40878d2da6716331f0a893f4c9f3938e30cde34eaf4eb8051eda58bfc84a6a6c"} 
Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.067973 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.155691 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk"] Jan 21 11:45:00 crc kubenswrapper[4881]: E0121 11:45:00.156233 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31857b1b-0b5b-40a8-8706-9002ca7c878b" containerName="extract-content" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.156253 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="31857b1b-0b5b-40a8-8706-9002ca7c878b" containerName="extract-content" Jan 21 11:45:00 crc kubenswrapper[4881]: E0121 11:45:00.156276 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31857b1b-0b5b-40a8-8706-9002ca7c878b" containerName="extract-utilities" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.156284 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="31857b1b-0b5b-40a8-8706-9002ca7c878b" containerName="extract-utilities" Jan 21 11:45:00 crc kubenswrapper[4881]: E0121 11:45:00.156308 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e7b801-0b42-4a0f-9d8a-6098f067d197" containerName="registry-server" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.156317 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e7b801-0b42-4a0f-9d8a-6098f067d197" containerName="registry-server" Jan 21 11:45:00 crc kubenswrapper[4881]: E0121 11:45:00.156334 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d42aa8e-f444-4984-a8d7-7a207bf7c53f" containerName="extract-content" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.156343 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d42aa8e-f444-4984-a8d7-7a207bf7c53f" containerName="extract-content" Jan 21 11:45:00 crc kubenswrapper[4881]: E0121 11:45:00.156358 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e7b801-0b42-4a0f-9d8a-6098f067d197" containerName="extract-utilities" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.156366 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e7b801-0b42-4a0f-9d8a-6098f067d197" containerName="extract-utilities" Jan 21 11:45:00 crc kubenswrapper[4881]: E0121 11:45:00.156388 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d42aa8e-f444-4984-a8d7-7a207bf7c53f" containerName="registry-server" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.156396 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d42aa8e-f444-4984-a8d7-7a207bf7c53f" containerName="registry-server" Jan 21 11:45:00 crc kubenswrapper[4881]: E0121 11:45:00.156428 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31857b1b-0b5b-40a8-8706-9002ca7c878b" containerName="registry-server" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.156435 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="31857b1b-0b5b-40a8-8706-9002ca7c878b" containerName="registry-server" Jan 21 11:45:00 crc kubenswrapper[4881]: E0121 11:45:00.156452 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e7b801-0b42-4a0f-9d8a-6098f067d197" containerName="extract-content" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.156460 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e7b801-0b42-4a0f-9d8a-6098f067d197" containerName="extract-content" 
Jan 21 11:45:00 crc kubenswrapper[4881]: E0121 11:45:00.156481 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d42aa8e-f444-4984-a8d7-7a207bf7c53f" containerName="extract-utilities" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.156489 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d42aa8e-f444-4984-a8d7-7a207bf7c53f" containerName="extract-utilities" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.156709 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0e7b801-0b42-4a0f-9d8a-6098f067d197" containerName="registry-server" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.156729 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d42aa8e-f444-4984-a8d7-7a207bf7c53f" containerName="registry-server" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.156763 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="31857b1b-0b5b-40a8-8706-9002ca7c878b" containerName="registry-server" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.157620 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.160098 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.162577 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.181034 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk"] Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.202191 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49387e54-5709-46bd-9f76-cd79369d9abe-config-volume\") pod \"collect-profiles-29483265-wh6tk\" (UID: \"49387e54-5709-46bd-9f76-cd79369d9abe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.202323 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49387e54-5709-46bd-9f76-cd79369d9abe-secret-volume\") pod \"collect-profiles-29483265-wh6tk\" (UID: \"49387e54-5709-46bd-9f76-cd79369d9abe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.202384 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn9xt\" (UniqueName: \"kubernetes.io/projected/49387e54-5709-46bd-9f76-cd79369d9abe-kube-api-access-sn9xt\") pod \"collect-profiles-29483265-wh6tk\" (UID: \"49387e54-5709-46bd-9f76-cd79369d9abe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.304119 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49387e54-5709-46bd-9f76-cd79369d9abe-config-volume\") pod \"collect-profiles-29483265-wh6tk\" (UID: \"49387e54-5709-46bd-9f76-cd79369d9abe\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.305289 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49387e54-5709-46bd-9f76-cd79369d9abe-config-volume\") pod \"collect-profiles-29483265-wh6tk\" (UID: \"49387e54-5709-46bd-9f76-cd79369d9abe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.305566 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49387e54-5709-46bd-9f76-cd79369d9abe-secret-volume\") pod \"collect-profiles-29483265-wh6tk\" (UID: \"49387e54-5709-46bd-9f76-cd79369d9abe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.306512 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn9xt\" (UniqueName: \"kubernetes.io/projected/49387e54-5709-46bd-9f76-cd79369d9abe-kube-api-access-sn9xt\") pod \"collect-profiles-29483265-wh6tk\" (UID: \"49387e54-5709-46bd-9f76-cd79369d9abe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.312602 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49387e54-5709-46bd-9f76-cd79369d9abe-secret-volume\") pod \"collect-profiles-29483265-wh6tk\" (UID: \"49387e54-5709-46bd-9f76-cd79369d9abe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.332671 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn9xt\" (UniqueName: \"kubernetes.io/projected/49387e54-5709-46bd-9f76-cd79369d9abe-kube-api-access-sn9xt\") pod \"collect-profiles-29483265-wh6tk\" (UID: \"49387e54-5709-46bd-9f76-cd79369d9abe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.492038 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk" Jan 21 11:45:01 crc kubenswrapper[4881]: I0121 11:45:01.000398 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk"] Jan 21 11:45:01 crc kubenswrapper[4881]: W0121 11:45:01.000966 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49387e54_5709_46bd_9f76_cd79369d9abe.slice/crio-ac4b1cff99fea5fc8da2ef32e7c40ee41c09df8b42122cd3aa4373de9aed23c2 WatchSource:0}: Error finding container ac4b1cff99fea5fc8da2ef32e7c40ee41c09df8b42122cd3aa4373de9aed23c2: Status 404 returned error can't find the container with id ac4b1cff99fea5fc8da2ef32e7c40ee41c09df8b42122cd3aa4373de9aed23c2 Jan 21 11:45:01 crc kubenswrapper[4881]: I0121 11:45:01.096507 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk" event={"ID":"49387e54-5709-46bd-9f76-cd79369d9abe","Type":"ContainerStarted","Data":"ac4b1cff99fea5fc8da2ef32e7c40ee41c09df8b42122cd3aa4373de9aed23c2"} Jan 21 11:45:01 crc kubenswrapper[4881]: I0121 11:45:01.107365 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57"} Jan 21 11:45:02 crc kubenswrapper[4881]: I0121 11:45:02.122006 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk" event={"ID":"49387e54-5709-46bd-9f76-cd79369d9abe","Type":"ContainerDied","Data":"03feba2a29229654c706a38fc1bff6c4df03df1eca6406a125ce3ee72913286b"} Jan 21 11:45:02 crc kubenswrapper[4881]: I0121 11:45:02.123120 4881 generic.go:334] "Generic (PLEG): container finished" podID="49387e54-5709-46bd-9f76-cd79369d9abe" containerID="03feba2a29229654c706a38fc1bff6c4df03df1eca6406a125ce3ee72913286b" exitCode=0 Jan 21 11:45:03 crc kubenswrapper[4881]: I0121 11:45:03.166277 4881 generic.go:334] "Generic (PLEG): container finished" podID="38ac646b-177b-488d-853b-e04b22f267a4" containerID="2e34d3926c62f8cffffc796ec975008bf3545972abcc913f207930e4451b062e" exitCode=0 Jan 21 11:45:03 crc kubenswrapper[4881]: I0121 11:45:03.166369 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" event={"ID":"38ac646b-177b-488d-853b-e04b22f267a4","Type":"ContainerDied","Data":"2e34d3926c62f8cffffc796ec975008bf3545972abcc913f207930e4451b062e"} Jan 21 11:45:03 crc kubenswrapper[4881]: I0121 11:45:03.496584 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk" Jan 21 11:45:03 crc kubenswrapper[4881]: I0121 11:45:03.698424 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn9xt\" (UniqueName: \"kubernetes.io/projected/49387e54-5709-46bd-9f76-cd79369d9abe-kube-api-access-sn9xt\") pod \"49387e54-5709-46bd-9f76-cd79369d9abe\" (UID: \"49387e54-5709-46bd-9f76-cd79369d9abe\") " Jan 21 11:45:03 crc kubenswrapper[4881]: I0121 11:45:03.698705 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49387e54-5709-46bd-9f76-cd79369d9abe-config-volume\") pod \"49387e54-5709-46bd-9f76-cd79369d9abe\" (UID: \"49387e54-5709-46bd-9f76-cd79369d9abe\") " Jan 21 11:45:03 crc kubenswrapper[4881]: I0121 11:45:03.699406 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49387e54-5709-46bd-9f76-cd79369d9abe-config-volume" (OuterVolumeSpecName: "config-volume") pod "49387e54-5709-46bd-9f76-cd79369d9abe" (UID: "49387e54-5709-46bd-9f76-cd79369d9abe"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:45:03 crc kubenswrapper[4881]: I0121 11:45:03.699477 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49387e54-5709-46bd-9f76-cd79369d9abe-secret-volume\") pod \"49387e54-5709-46bd-9f76-cd79369d9abe\" (UID: \"49387e54-5709-46bd-9f76-cd79369d9abe\") " Jan 21 11:45:03 crc kubenswrapper[4881]: I0121 11:45:03.700209 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49387e54-5709-46bd-9f76-cd79369d9abe-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 11:45:03 crc kubenswrapper[4881]: I0121 11:45:03.704345 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49387e54-5709-46bd-9f76-cd79369d9abe-kube-api-access-sn9xt" (OuterVolumeSpecName: "kube-api-access-sn9xt") pod "49387e54-5709-46bd-9f76-cd79369d9abe" (UID: "49387e54-5709-46bd-9f76-cd79369d9abe"). InnerVolumeSpecName "kube-api-access-sn9xt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:45:03 crc kubenswrapper[4881]: I0121 11:45:03.704770 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49387e54-5709-46bd-9f76-cd79369d9abe-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "49387e54-5709-46bd-9f76-cd79369d9abe" (UID: "49387e54-5709-46bd-9f76-cd79369d9abe"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:45:03 crc kubenswrapper[4881]: I0121 11:45:03.801262 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49387e54-5709-46bd-9f76-cd79369d9abe-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 11:45:03 crc kubenswrapper[4881]: I0121 11:45:03.801294 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn9xt\" (UniqueName: \"kubernetes.io/projected/49387e54-5709-46bd-9f76-cd79369d9abe-kube-api-access-sn9xt\") on node \"crc\" DevicePath \"\"" Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.184163 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk" Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.184186 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk" event={"ID":"49387e54-5709-46bd-9f76-cd79369d9abe","Type":"ContainerDied","Data":"ac4b1cff99fea5fc8da2ef32e7c40ee41c09df8b42122cd3aa4373de9aed23c2"} Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.184223 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac4b1cff99fea5fc8da2ef32e7c40ee41c09df8b42122cd3aa4373de9aed23c2" Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.585042 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb"] Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.600114 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb"] Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.718508 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.832418 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-libvirt-secret-0\") pod \"38ac646b-177b-488d-853b-e04b22f267a4\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.832538 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-libvirt-combined-ca-bundle\") pod \"38ac646b-177b-488d-853b-e04b22f267a4\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.832599 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptwlv\" (UniqueName: \"kubernetes.io/projected/38ac646b-177b-488d-853b-e04b22f267a4-kube-api-access-ptwlv\") pod \"38ac646b-177b-488d-853b-e04b22f267a4\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.832618 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-inventory\") pod \"38ac646b-177b-488d-853b-e04b22f267a4\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.832666 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-ssh-key-openstack-edpm-ipam\") pod \"38ac646b-177b-488d-853b-e04b22f267a4\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.838451 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "38ac646b-177b-488d-853b-e04b22f267a4" (UID: "38ac646b-177b-488d-853b-e04b22f267a4"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.839189 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38ac646b-177b-488d-853b-e04b22f267a4-kube-api-access-ptwlv" (OuterVolumeSpecName: "kube-api-access-ptwlv") pod "38ac646b-177b-488d-853b-e04b22f267a4" (UID: "38ac646b-177b-488d-853b-e04b22f267a4"). InnerVolumeSpecName "kube-api-access-ptwlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.863852 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-inventory" (OuterVolumeSpecName: "inventory") pod "38ac646b-177b-488d-853b-e04b22f267a4" (UID: "38ac646b-177b-488d-853b-e04b22f267a4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.864311 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "38ac646b-177b-488d-853b-e04b22f267a4" (UID: "38ac646b-177b-488d-853b-e04b22f267a4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.875564 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "38ac646b-177b-488d-853b-e04b22f267a4" (UID: "38ac646b-177b-488d-853b-e04b22f267a4"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.935709 4881 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.935747 4881 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.935763 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptwlv\" (UniqueName: \"kubernetes.io/projected/38ac646b-177b-488d-853b-e04b22f267a4-kube-api-access-ptwlv\") on node \"crc\" DevicePath \"\"" Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.935775 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.935808 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.199751 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" event={"ID":"38ac646b-177b-488d-853b-e04b22f267a4","Type":"ContainerDied","Data":"dc79678ab6ba1932de7e4e05e7465b949910c18ea04deeee070bef7c91f2f1e4"} Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.199839 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc79678ab6ba1932de7e4e05e7465b949910c18ea04deeee070bef7c91f2f1e4" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.199886 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.275027 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m"] Jan 21 11:45:05 crc kubenswrapper[4881]: E0121 11:45:05.275504 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38ac646b-177b-488d-853b-e04b22f267a4" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.275531 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="38ac646b-177b-488d-853b-e04b22f267a4" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 21 11:45:05 crc kubenswrapper[4881]: E0121 11:45:05.275591 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49387e54-5709-46bd-9f76-cd79369d9abe" containerName="collect-profiles" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.275600 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="49387e54-5709-46bd-9f76-cd79369d9abe" containerName="collect-profiles" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.275948 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="38ac646b-177b-488d-853b-e04b22f267a4" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.276063 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="49387e54-5709-46bd-9f76-cd79369d9abe" containerName="collect-profiles" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.277282 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.281405 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.281620 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.281669 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.281507 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.281625 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.281859 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.281532 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.303544 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m"] Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.332861 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65c09a3a-6389-443c-888b-fe83557dd508" path="/var/lib/kubelet/pods/65c09a3a-6389-443c-888b-fe83557dd508/volumes" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.445269 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.445368 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.445435 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.445461 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.445500 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.445532 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbfq5\" (UniqueName: \"kubernetes.io/projected/bfc5a115-aedb-4364-8b0d-59b8379346cb-kube-api-access-hbfq5\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.445565 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.445605 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.445677 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.548142 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.548829 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.548954 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.549071 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.549180 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.549282 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbfq5\" (UniqueName: \"kubernetes.io/projected/bfc5a115-aedb-4364-8b0d-59b8379346cb-kube-api-access-hbfq5\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.549432 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.549562 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.549735 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.551090 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.554218 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.554936 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.555167 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.555498 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.555503 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc 
kubenswrapper[4881]: I0121 11:45:05.556767 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.562229 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.573513 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbfq5\" (UniqueName: \"kubernetes.io/projected/bfc5a115-aedb-4364-8b0d-59b8379346cb-kube-api-access-hbfq5\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.611084 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:06 crc kubenswrapper[4881]: I0121 11:45:06.205070 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m"] Jan 21 11:45:07 crc kubenswrapper[4881]: I0121 11:45:07.245954 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" event={"ID":"bfc5a115-aedb-4364-8b0d-59b8379346cb","Type":"ContainerStarted","Data":"88686bced315f81283d95e59e4f2403c8b2d8fed5959e3b75d3616a3313db4e6"} Jan 21 11:45:07 crc kubenswrapper[4881]: I0121 11:45:07.247388 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" event={"ID":"bfc5a115-aedb-4364-8b0d-59b8379346cb","Type":"ContainerStarted","Data":"e961a6307da8e32005ab966a01a4319c67608126400b0a7e33b34ae83eadc3c1"} Jan 21 11:45:07 crc kubenswrapper[4881]: I0121 11:45:07.271805 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" podStartSLOduration=1.812258646 podStartE2EDuration="2.271761279s" podCreationTimestamp="2026-01-21 11:45:05 +0000 UTC" firstStartedPulling="2026-01-21 11:45:06.21449066 +0000 UTC m=+2893.474447129" lastFinishedPulling="2026-01-21 11:45:06.673993293 +0000 UTC m=+2893.933949762" observedRunningTime="2026-01-21 11:45:07.263383103 +0000 UTC m=+2894.523339572" watchObservedRunningTime="2026-01-21 11:45:07.271761279 +0000 UTC m=+2894.531717758"
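
The "Observed pod startup duration" record above encodes a relationship the logged numbers confirm: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (11:45:07.271761279 - 11:45:05 = 2.271761279s), and podStartSLOduration is that span minus the image-pull window lastFinishedPulling - firstStartedPulling (2.271761279s - 0.459502633s = 1.812258646s), i.e. the SLO metric discounts time spent pulling images. A minimal Go sketch reproducing the arithmetic, with the timestamps copied verbatim from the record (the monotonic-clock "m=+..." suffixes are dropped because time.Parse does not consume them):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Go's default time.Time format, which these kubelet records use.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2026-01-21 11:45:05 +0000 UTC")            // podCreationTimestamp
	firstPull := parse("2026-01-21 11:45:06.21449066 +0000 UTC") // firstStartedPulling
	lastPull := parse("2026-01-21 11:45:06.673993293 +0000 UTC") // lastFinishedPulling
	observed := parse("2026-01-21 11:45:07.271761279 +0000 UTC") // watchObservedRunningTime

	e2e := observed.Sub(created)    // 2.271761279s == podStartE2EDuration
	pull := lastPull.Sub(firstPull) // 459.502633ms spent pulling images
	slo := e2e - pull               // 1.812258646s == podStartSLOduration

	fmt.Println(e2e, pull, slo)
}

The telemetry-edpm record further down satisfies the same identity: 3.621400031s - 0.538058536s = 3.083341475s.
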
Jan 21 11:45:31 crc kubenswrapper[4881]: I0121 11:45:31.761204 4881 scope.go:117] "RemoveContainer" containerID="506baee9263f2e28d3f1ef1ef645da28ead83f7c212d5255ebc44d13c43d15f7" Jan 21 11:47:29 crc kubenswrapper[4881]: I0121 11:47:29.851540 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:47:29 crc kubenswrapper[4881]: I0121 11:47:29.852204 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:47:48 crc kubenswrapper[4881]: I0121 11:47:48.204105 4881 generic.go:334] "Generic (PLEG): container finished" podID="bfc5a115-aedb-4364-8b0d-59b8379346cb" containerID="88686bced315f81283d95e59e4f2403c8b2d8fed5959e3b75d3616a3313db4e6" exitCode=0 Jan 21 11:47:48 crc kubenswrapper[4881]: I0121 11:47:48.204192 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" event={"ID":"bfc5a115-aedb-4364-8b0d-59b8379346cb","Type":"ContainerDied","Data":"88686bced315f81283d95e59e4f2403c8b2d8fed5959e3b75d3616a3313db4e6"} Jan 21 11:47:49 crc kubenswrapper[4881]: I0121 11:47:49.992198 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.084481 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-extra-config-0\") pod \"bfc5a115-aedb-4364-8b0d-59b8379346cb\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.084539 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-inventory\") pod \"bfc5a115-aedb-4364-8b0d-59b8379346cb\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.084639 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-ssh-key-openstack-edpm-ipam\") pod \"bfc5a115-aedb-4364-8b0d-59b8379346cb\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.085441 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbfq5\" (UniqueName: \"kubernetes.io/projected/bfc5a115-aedb-4364-8b0d-59b8379346cb-kube-api-access-hbfq5\") pod \"bfc5a115-aedb-4364-8b0d-59b8379346cb\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.086213 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-combined-ca-bundle\") pod \"bfc5a115-aedb-4364-8b0d-59b8379346cb\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.086270 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-migration-ssh-key-0\") pod \"bfc5a115-aedb-4364-8b0d-59b8379346cb\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.086376 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName:
\"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-cell1-compute-config-1\") pod \"bfc5a115-aedb-4364-8b0d-59b8379346cb\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.086414 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-migration-ssh-key-1\") pod \"bfc5a115-aedb-4364-8b0d-59b8379346cb\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.086507 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-cell1-compute-config-0\") pod \"bfc5a115-aedb-4364-8b0d-59b8379346cb\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.091868 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "bfc5a115-aedb-4364-8b0d-59b8379346cb" (UID: "bfc5a115-aedb-4364-8b0d-59b8379346cb"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.093833 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfc5a115-aedb-4364-8b0d-59b8379346cb-kube-api-access-hbfq5" (OuterVolumeSpecName: "kube-api-access-hbfq5") pod "bfc5a115-aedb-4364-8b0d-59b8379346cb" (UID: "bfc5a115-aedb-4364-8b0d-59b8379346cb"). InnerVolumeSpecName "kube-api-access-hbfq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.115127 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "bfc5a115-aedb-4364-8b0d-59b8379346cb" (UID: "bfc5a115-aedb-4364-8b0d-59b8379346cb"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.120218 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bfc5a115-aedb-4364-8b0d-59b8379346cb" (UID: "bfc5a115-aedb-4364-8b0d-59b8379346cb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.120326 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-inventory" (OuterVolumeSpecName: "inventory") pod "bfc5a115-aedb-4364-8b0d-59b8379346cb" (UID: "bfc5a115-aedb-4364-8b0d-59b8379346cb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.121568 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "bfc5a115-aedb-4364-8b0d-59b8379346cb" (UID: "bfc5a115-aedb-4364-8b0d-59b8379346cb"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.123731 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "bfc5a115-aedb-4364-8b0d-59b8379346cb" (UID: "bfc5a115-aedb-4364-8b0d-59b8379346cb"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.130959 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "bfc5a115-aedb-4364-8b0d-59b8379346cb" (UID: "bfc5a115-aedb-4364-8b0d-59b8379346cb"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.132433 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "bfc5a115-aedb-4364-8b0d-59b8379346cb" (UID: "bfc5a115-aedb-4364-8b0d-59b8379346cb"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.189035 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.189077 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbfq5\" (UniqueName: \"kubernetes.io/projected/bfc5a115-aedb-4364-8b0d-59b8379346cb-kube-api-access-hbfq5\") on node \"crc\" DevicePath \"\"" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.189086 4881 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.189095 4881 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.189134 4881 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.189145 4881 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.189154 4881 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.189164 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.189175 4881 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.527278 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" event={"ID":"bfc5a115-aedb-4364-8b0d-59b8379346cb","Type":"ContainerDied","Data":"e961a6307da8e32005ab966a01a4319c67608126400b0a7e33b34ae83eadc3c1"} Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.527571 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e961a6307da8e32005ab966a01a4319c67608126400b0a7e33b34ae83eadc3c1" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.527372 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.613972 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr"] Jan 21 11:47:50 crc kubenswrapper[4881]: E0121 11:47:50.614578 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfc5a115-aedb-4364-8b0d-59b8379346cb" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.614603 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfc5a115-aedb-4364-8b0d-59b8379346cb" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.614940 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfc5a115-aedb-4364-8b0d-59b8379346cb" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.616236 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.620033 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.620701 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.620772 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.622633 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr"] Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.624722 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.624987 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.802099 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2kv7\" (UniqueName: \"kubernetes.io/projected/2f9f4763-a2f6-4558-82fa-be718012fc12-kube-api-access-l2kv7\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.802602 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.802969 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: 
\"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.803315 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.803587 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.803945 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.804130 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.906203 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.906265 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.906313 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2kv7\" (UniqueName: \"kubernetes.io/projected/2f9f4763-a2f6-4558-82fa-be718012fc12-kube-api-access-l2kv7\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.906355 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.906458 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.906523 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.906545 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.911377 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.912256 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.914075 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.914737 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.915130 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.916258 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.935799 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2kv7\" (UniqueName: \"kubernetes.io/projected/2f9f4763-a2f6-4558-82fa-be718012fc12-kube-api-access-l2kv7\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:51 crc kubenswrapper[4881]: I0121 11:47:51.233325 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:51 crc kubenswrapper[4881]: I0121 11:47:51.936681 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr"] Jan 21 11:47:52 crc kubenswrapper[4881]: I0121 11:47:52.586521 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" event={"ID":"2f9f4763-a2f6-4558-82fa-be718012fc12","Type":"ContainerStarted","Data":"2bd3402b9e27d9638a3014022bc0917662606afb76306548d94c2dbe1498c53a"} Jan 21 11:47:53 crc kubenswrapper[4881]: I0121 11:47:53.597048 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" event={"ID":"2f9f4763-a2f6-4558-82fa-be718012fc12","Type":"ContainerStarted","Data":"d3be7960d0b27110197d7181b46b708d56c6c1ea3312bb674678bb754bbcd27d"} Jan 21 11:47:53 crc kubenswrapper[4881]: I0121 11:47:53.621426 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" podStartSLOduration=3.083341475 podStartE2EDuration="3.621400031s" podCreationTimestamp="2026-01-21 11:47:50 +0000 UTC" firstStartedPulling="2026-01-21 11:47:51.94301643 +0000 UTC m=+3059.202972899" lastFinishedPulling="2026-01-21 11:47:52.481074966 +0000 UTC m=+3059.741031455" observedRunningTime="2026-01-21 11:47:53.620234633 +0000 UTC m=+3060.880191112" watchObservedRunningTime="2026-01-21 11:47:53.621400031 +0000 UTC m=+3060.881356510" Jan 21 11:47:59 crc kubenswrapper[4881]: I0121 11:47:59.850866 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:47:59 crc kubenswrapper[4881]: I0121 11:47:59.851550 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" 
podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:48:29 crc kubenswrapper[4881]: I0121 11:48:29.850963 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:48:29 crc kubenswrapper[4881]: I0121 11:48:29.851527 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:48:29 crc kubenswrapper[4881]: I0121 11:48:29.851581 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 11:48:29 crc kubenswrapper[4881]: I0121 11:48:29.852494 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:48:29 crc kubenswrapper[4881]: I0121 11:48:29.852564 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" gracePeriod=600 Jan 21 11:48:30 crc kubenswrapper[4881]: E0121 11:48:30.519616 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:48:30 crc kubenswrapper[4881]: I0121 11:48:30.987164 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" exitCode=0 Jan 21 11:48:30 crc kubenswrapper[4881]: I0121 11:48:30.987259 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57"} Jan 21 11:48:30 crc kubenswrapper[4881]: I0121 11:48:30.987377 4881 scope.go:117] "RemoveContainer" containerID="40878d2da6716331f0a893f4c9f3938e30cde34eaf4eb8051eda58bfc84a6a6c" Jan 21 11:48:30 crc kubenswrapper[4881]: I0121 11:48:30.988487 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:48:30 crc kubenswrapper[4881]: E0121 11:48:30.989287 4881 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:48:44 crc kubenswrapper[4881]: I0121 11:48:44.311793 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:48:44 crc kubenswrapper[4881]: E0121 11:48:44.312617 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:48:56 crc kubenswrapper[4881]: I0121 11:48:56.311298 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:48:56 crc kubenswrapper[4881]: E0121 11:48:56.312705 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:49:09 crc kubenswrapper[4881]: I0121 11:49:09.312887 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:49:09 crc kubenswrapper[4881]: E0121 11:49:09.313911 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:49:20 crc kubenswrapper[4881]: I0121 11:49:20.312389 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:49:20 crc kubenswrapper[4881]: E0121 11:49:20.313770 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:49:32 crc kubenswrapper[4881]: I0121 11:49:32.311986 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:49:32 crc kubenswrapper[4881]: E0121 11:49:32.312835 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:49:45 crc kubenswrapper[4881]: I0121 11:49:45.311240 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:49:45 crc kubenswrapper[4881]: E0121 11:49:45.312404 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:49:59 crc kubenswrapper[4881]: I0121 11:49:59.311168 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:49:59 crc kubenswrapper[4881]: E0121 11:49:59.312525 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:50:11 crc kubenswrapper[4881]: I0121 11:50:11.311296 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:50:11 crc kubenswrapper[4881]: E0121 11:50:11.312315 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:50:18 crc kubenswrapper[4881]: I0121 11:50:18.158200 4881 generic.go:334] "Generic (PLEG): container finished" podID="2f9f4763-a2f6-4558-82fa-be718012fc12" containerID="d3be7960d0b27110197d7181b46b708d56c6c1ea3312bb674678bb754bbcd27d" exitCode=0 Jan 21 11:50:18 crc kubenswrapper[4881]: I0121 11:50:18.158328 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" event={"ID":"2f9f4763-a2f6-4558-82fa-be718012fc12","Type":"ContainerDied","Data":"d3be7960d0b27110197d7181b46b708d56c6c1ea3312bb674678bb754bbcd27d"} Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.671773 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.849764 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ssh-key-openstack-edpm-ipam\") pod \"2f9f4763-a2f6-4558-82fa-be718012fc12\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.849882 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-2\") pod \"2f9f4763-a2f6-4558-82fa-be718012fc12\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.849941 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-0\") pod \"2f9f4763-a2f6-4558-82fa-be718012fc12\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.850031 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-1\") pod \"2f9f4763-a2f6-4558-82fa-be718012fc12\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.850117 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-inventory\") pod \"2f9f4763-a2f6-4558-82fa-be718012fc12\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.850157 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2kv7\" (UniqueName: \"kubernetes.io/projected/2f9f4763-a2f6-4558-82fa-be718012fc12-kube-api-access-l2kv7\") pod \"2f9f4763-a2f6-4558-82fa-be718012fc12\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.850222 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-telemetry-combined-ca-bundle\") pod \"2f9f4763-a2f6-4558-82fa-be718012fc12\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.855976 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "2f9f4763-a2f6-4558-82fa-be718012fc12" (UID: "2f9f4763-a2f6-4558-82fa-be718012fc12"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.857671 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f9f4763-a2f6-4558-82fa-be718012fc12-kube-api-access-l2kv7" (OuterVolumeSpecName: "kube-api-access-l2kv7") pod "2f9f4763-a2f6-4558-82fa-be718012fc12" (UID: "2f9f4763-a2f6-4558-82fa-be718012fc12"). InnerVolumeSpecName "kube-api-access-l2kv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.884053 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2f9f4763-a2f6-4558-82fa-be718012fc12" (UID: "2f9f4763-a2f6-4558-82fa-be718012fc12"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.891904 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-inventory" (OuterVolumeSpecName: "inventory") pod "2f9f4763-a2f6-4558-82fa-be718012fc12" (UID: "2f9f4763-a2f6-4558-82fa-be718012fc12"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.896346 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "2f9f4763-a2f6-4558-82fa-be718012fc12" (UID: "2f9f4763-a2f6-4558-82fa-be718012fc12"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.898390 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "2f9f4763-a2f6-4558-82fa-be718012fc12" (UID: "2f9f4763-a2f6-4558-82fa-be718012fc12"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.910354 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "2f9f4763-a2f6-4558-82fa-be718012fc12" (UID: "2f9f4763-a2f6-4558-82fa-be718012fc12"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.953774 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.953866 4881 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.953882 4881 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.953896 4881 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.953909 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.953933 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2kv7\" (UniqueName: \"kubernetes.io/projected/2f9f4763-a2f6-4558-82fa-be718012fc12-kube-api-access-l2kv7\") on node \"crc\" DevicePath \"\"" Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.953950 4881 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:50:20 crc kubenswrapper[4881]: I0121 11:50:20.182077 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" event={"ID":"2f9f4763-a2f6-4558-82fa-be718012fc12","Type":"ContainerDied","Data":"2bd3402b9e27d9638a3014022bc0917662606afb76306548d94c2dbe1498c53a"} Jan 21 11:50:20 crc kubenswrapper[4881]: I0121 11:50:20.182171 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bd3402b9e27d9638a3014022bc0917662606afb76306548d94c2dbe1498c53a" Jan 21 11:50:20 crc kubenswrapper[4881]: I0121 11:50:20.182191 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:50:22 crc kubenswrapper[4881]: I0121 11:50:22.311640 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:50:22 crc kubenswrapper[4881]: E0121 11:50:22.312285 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:50:31 crc kubenswrapper[4881]: I0121 11:50:31.964726 4881 scope.go:117] "RemoveContainer" containerID="a3b87112cc4e2f5703453d1593b9d75e4be1102fb918a336d940180bb24d7b53" Jan 21 11:50:32 crc kubenswrapper[4881]: I0121 11:50:32.054825 4881 scope.go:117] "RemoveContainer" containerID="c46a2a4d819c8a32cc07d84e8693331645ce9fdf0d2715fdb9ac2374aedc71ff" Jan 21 11:50:32 crc kubenswrapper[4881]: I0121 11:50:32.107318 4881 scope.go:117] "RemoveContainer" containerID="be36f6ad834ca00233eadc7451dfda0c9752d18ed8499ac6ad57c9815db2567a" Jan 21 11:50:37 crc kubenswrapper[4881]: I0121 11:50:37.312202 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:50:37 crc kubenswrapper[4881]: E0121 11:50:37.313571 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:50:52 crc kubenswrapper[4881]: I0121 11:50:52.311339 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:50:52 crc kubenswrapper[4881]: E0121 11:50:52.312020 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:51:00 crc kubenswrapper[4881]: I0121 11:51:00.748843 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 21 11:51:00 crc kubenswrapper[4881]: E0121 11:51:00.749696 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f9f4763-a2f6-4558-82fa-be718012fc12" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 21 11:51:00 crc kubenswrapper[4881]: I0121 11:51:00.749710 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f9f4763-a2f6-4558-82fa-be718012fc12" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 21 11:51:00 crc kubenswrapper[4881]: I0121 11:51:00.749938 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f9f4763-a2f6-4558-82fa-be718012fc12" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 21 11:51:00 crc kubenswrapper[4881]: I0121 11:51:00.751046 4881 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 21 11:51:00 crc kubenswrapper[4881]: I0121 11:51:00.753272 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 21 11:51:00 crc kubenswrapper[4881]: I0121 11:51:00.776491 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.144590 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-dev\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.144827 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.144906 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-run\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.144957 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.145904 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjkhf\" (UniqueName: \"kubernetes.io/projected/306aceba-6a20-4b47-a19a-fb193a27e2bd-kube-api-access-vjkhf\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.145981 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.146049 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-sys\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.146071 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-lib-modules\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.146134 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/306aceba-6a20-4b47-a19a-fb193a27e2bd-config-data-custom\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.146203 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/306aceba-6a20-4b47-a19a-fb193a27e2bd-config-data\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.146286 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.146482 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-etc-nvme\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.146530 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/306aceba-6a20-4b47-a19a-fb193a27e2bd-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.146670 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/306aceba-6a20-4b47-a19a-fb193a27e2bd-scripts\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.146768 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.186137 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.189874 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.193952 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-config-data" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.199759 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.210438 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.212762 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.214581 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-2-config-data" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.227825 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249547 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/112f53db-2aaa-4a3d-bc89-fd86952639ab-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249603 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/306aceba-6a20-4b47-a19a-fb193a27e2bd-scripts\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249625 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249642 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-sys\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249687 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249710 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249731 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249750 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/112f53db-2aaa-4a3d-bc89-fd86952639ab-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249775 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" 
(UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249815 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-dev\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249838 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249840 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249862 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249892 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-dev\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249943 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c912ca5-a82b-4083-8579-f0f6f506eebb-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249978 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250007 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250039 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: 
I0121 11:51:01.250096 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250143 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-run\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250166 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/112f53db-2aaa-4a3d-bc89-fd86952639ab-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250191 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250221 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250253 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-run\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250273 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250304 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250334 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250350 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" 
(UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250379 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjkhf\" (UniqueName: \"kubernetes.io/projected/306aceba-6a20-4b47-a19a-fb193a27e2bd-kube-api-access-vjkhf\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250398 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c912ca5-a82b-4083-8579-f0f6f506eebb-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250431 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250449 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-dev\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250467 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-lib-modules\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250483 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-sys\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250501 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/306aceba-6a20-4b47-a19a-fb193a27e2bd-config-data-custom\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250516 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250534 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qv6v\" (UniqueName: \"kubernetes.io/projected/8c912ca5-a82b-4083-8579-f0f6f506eebb-kube-api-access-7qv6v\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " 
pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250553 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c912ca5-a82b-4083-8579-f0f6f506eebb-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250593 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250607 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/306aceba-6a20-4b47-a19a-fb193a27e2bd-config-data\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250656 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250702 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pf44\" (UniqueName: \"kubernetes.io/projected/112f53db-2aaa-4a3d-bc89-fd86952639ab-kube-api-access-4pf44\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250733 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250463 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-run\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250768 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250813 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-lib-modules\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250816 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/112f53db-2aaa-4a3d-bc89-fd86952639ab-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250931 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250943 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250965 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250986 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-sys\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.251128 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-etc-nvme\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.251148 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/306aceba-6a20-4b47-a19a-fb193a27e2bd-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.251193 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c912ca5-a82b-4083-8579-f0f6f506eebb-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.252264 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-etc-nvme\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.257185 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/306aceba-6a20-4b47-a19a-fb193a27e2bd-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.257214 4881 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/306aceba-6a20-4b47-a19a-fb193a27e2bd-config-data\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.268208 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjkhf\" (UniqueName: \"kubernetes.io/projected/306aceba-6a20-4b47-a19a-fb193a27e2bd-kube-api-access-vjkhf\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.269801 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/306aceba-6a20-4b47-a19a-fb193a27e2bd-scripts\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.289208 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/306aceba-6a20-4b47-a19a-fb193a27e2bd-config-data-custom\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.352889 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/112f53db-2aaa-4a3d-bc89-fd86952639ab-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.352933 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.353082 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-sys\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354002 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.353448 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354073 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354076 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.353661 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-sys\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354145 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354211 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/112f53db-2aaa-4a3d-bc89-fd86952639ab-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354232 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354273 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354291 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c912ca5-a82b-4083-8579-f0f6f506eebb-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354306 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354333 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354354 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: 
I0121 11:51:01.354375 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354408 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/112f53db-2aaa-4a3d-bc89-fd86952639ab-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354451 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354469 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-run\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354491 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354526 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354559 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354575 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354616 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c912ca5-a82b-4083-8579-f0f6f506eebb-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354644 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-dev\") pod \"cinder-volume-nfs-0\" (UID: 
\"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354689 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354707 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c912ca5-a82b-4083-8579-f0f6f506eebb-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354724 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qv6v\" (UniqueName: \"kubernetes.io/projected/8c912ca5-a82b-4083-8579-f0f6f506eebb-kube-api-access-7qv6v\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354754 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354778 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pf44\" (UniqueName: \"kubernetes.io/projected/112f53db-2aaa-4a3d-bc89-fd86952639ab-kube-api-access-4pf44\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354881 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/112f53db-2aaa-4a3d-bc89-fd86952639ab-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354911 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354954 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c912ca5-a82b-4083-8579-f0f6f506eebb-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.355022 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.355618 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.355745 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.355762 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.355798 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.355924 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.356128 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.356158 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.356178 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.356538 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-dev\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.356702 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.356759 4881 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.357563 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-run\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.357838 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.357885 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.358911 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.360304 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/112f53db-2aaa-4a3d-bc89-fd86952639ab-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.360449 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/112f53db-2aaa-4a3d-bc89-fd86952639ab-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.360597 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/112f53db-2aaa-4a3d-bc89-fd86952639ab-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.361317 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c912ca5-a82b-4083-8579-f0f6f506eebb-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.361505 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/112f53db-2aaa-4a3d-bc89-fd86952639ab-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc 
kubenswrapper[4881]: I0121 11:51:01.362164 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c912ca5-a82b-4083-8579-f0f6f506eebb-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.362484 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c912ca5-a82b-4083-8579-f0f6f506eebb-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.362487 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c912ca5-a82b-4083-8579-f0f6f506eebb-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.372002 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qv6v\" (UniqueName: \"kubernetes.io/projected/8c912ca5-a82b-4083-8579-f0f6f506eebb-kube-api-access-7qv6v\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.372480 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.381328 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pf44\" (UniqueName: \"kubernetes.io/projected/112f53db-2aaa-4a3d-bc89-fd86952639ab-kube-api-access-4pf44\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.525922 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.535815 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:02 crc kubenswrapper[4881]: I0121 11:51:02.036880 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 21 11:51:02 crc kubenswrapper[4881]: I0121 11:51:02.040613 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 11:51:02 crc kubenswrapper[4881]: I0121 11:51:02.175093 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"306aceba-6a20-4b47-a19a-fb193a27e2bd","Type":"ContainerStarted","Data":"88e62150086ddc64733a5fbe0b1661bba3ff3d3940cf9e954f6c44084e9add0d"} Jan 21 11:51:02 crc kubenswrapper[4881]: I0121 11:51:02.245069 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 21 11:51:03 crc kubenswrapper[4881]: I0121 11:51:03.215069 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"8c912ca5-a82b-4083-8579-f0f6f506eebb","Type":"ContainerStarted","Data":"c64419b0f7588b60b091afdc05906d8f4c63760c6fa6bf5710b9012941fc09e2"} Jan 21 11:51:03 crc kubenswrapper[4881]: I0121 11:51:03.335057 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Jan 21 11:51:04 crc kubenswrapper[4881]: I0121 11:51:04.226911 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"8c912ca5-a82b-4083-8579-f0f6f506eebb","Type":"ContainerStarted","Data":"7e23f895a3e3240ba0d64f3af69bd387fa9627ecbd8f77e31aedca0cbe2abfd1"} Jan 21 11:51:04 crc kubenswrapper[4881]: I0121 11:51:04.227450 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"8c912ca5-a82b-4083-8579-f0f6f506eebb","Type":"ContainerStarted","Data":"82b1ddd2a7192aecee0cb6c979adac6a1822ef5362bcd3ed72cefa8f4fb43255"} Jan 21 11:51:04 crc kubenswrapper[4881]: I0121 11:51:04.229378 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"112f53db-2aaa-4a3d-bc89-fd86952639ab","Type":"ContainerStarted","Data":"c242f7d24b426e3c7f6e8f921fcccde19ffb9e0c2de9853a8b6dab2745aecbe9"} Jan 21 11:51:04 crc kubenswrapper[4881]: I0121 11:51:04.229447 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"112f53db-2aaa-4a3d-bc89-fd86952639ab","Type":"ContainerStarted","Data":"8e4078cf91b68cbdd344bf8cec14191a784934146e78034db355bd0ce3c45085"} Jan 21 11:51:04 crc kubenswrapper[4881]: I0121 11:51:04.229462 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"112f53db-2aaa-4a3d-bc89-fd86952639ab","Type":"ContainerStarted","Data":"0f9c7c8e501d39bc5e4aeb520b3757995e1277aec6df6917f4ddf1ff65a1a031"} Jan 21 11:51:04 crc kubenswrapper[4881]: I0121 11:51:04.231647 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"306aceba-6a20-4b47-a19a-fb193a27e2bd","Type":"ContainerStarted","Data":"376a801a5f723a90aca788b2db2d06aceabf31e9141502d8dcbce2528567a939"} Jan 21 11:51:04 crc kubenswrapper[4881]: I0121 11:51:04.231686 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"306aceba-6a20-4b47-a19a-fb193a27e2bd","Type":"ContainerStarted","Data":"bb24a1186fc46593e1f17b841ada4b3372147ce1e352d15abdbc3cb14e043eb9"} Jan 21 11:51:04 crc kubenswrapper[4881]: I0121 11:51:04.275569 4881 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/cinder-volume-nfs-0" podStartSLOduration=2.327771643 podStartE2EDuration="3.275553197s" podCreationTimestamp="2026-01-21 11:51:01 +0000 UTC" firstStartedPulling="2026-01-21 11:51:02.285290028 +0000 UTC m=+3249.545246497" lastFinishedPulling="2026-01-21 11:51:03.233071582 +0000 UTC m=+3250.493028051" observedRunningTime="2026-01-21 11:51:04.270644407 +0000 UTC m=+3251.530600876" watchObservedRunningTime="2026-01-21 11:51:04.275553197 +0000 UTC m=+3251.535509666" Jan 21 11:51:04 crc kubenswrapper[4881]: I0121 11:51:04.307748 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-2-0" podStartSLOduration=3.30773298 podStartE2EDuration="3.30773298s" podCreationTimestamp="2026-01-21 11:51:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:51:04.30406157 +0000 UTC m=+3251.564018029" watchObservedRunningTime="2026-01-21 11:51:04.30773298 +0000 UTC m=+3251.567689449" Jan 21 11:51:04 crc kubenswrapper[4881]: I0121 11:51:04.341571 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=4.045509104 podStartE2EDuration="4.341551422s" podCreationTimestamp="2026-01-21 11:51:00 +0000 UTC" firstStartedPulling="2026-01-21 11:51:02.040334937 +0000 UTC m=+3249.300291406" lastFinishedPulling="2026-01-21 11:51:02.336377255 +0000 UTC m=+3249.596333724" observedRunningTime="2026-01-21 11:51:04.330777408 +0000 UTC m=+3251.590733877" watchObservedRunningTime="2026-01-21 11:51:04.341551422 +0000 UTC m=+3251.601507881" Jan 21 11:51:06 crc kubenswrapper[4881]: I0121 11:51:06.373608 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 21 11:51:06 crc kubenswrapper[4881]: I0121 11:51:06.526162 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:06 crc kubenswrapper[4881]: I0121 11:51:06.537112 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:07 crc kubenswrapper[4881]: I0121 11:51:07.447865 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:51:07 crc kubenswrapper[4881]: E0121 11:51:07.448355 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:51:11 crc kubenswrapper[4881]: I0121 11:51:11.605604 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 21 11:51:11 crc kubenswrapper[4881]: I0121 11:51:11.709742 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:11 crc kubenswrapper[4881]: I0121 11:51:11.778242 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:20 crc kubenswrapper[4881]: I0121 11:51:20.311577 4881 scope.go:117] "RemoveContainer" 
containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:51:20 crc kubenswrapper[4881]: E0121 11:51:20.312950 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:51:35 crc kubenswrapper[4881]: I0121 11:51:35.310872 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:51:35 crc kubenswrapper[4881]: E0121 11:51:35.311729 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:51:50 crc kubenswrapper[4881]: I0121 11:51:50.311295 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:51:50 crc kubenswrapper[4881]: E0121 11:51:50.313141 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:52:05 crc kubenswrapper[4881]: I0121 11:52:05.311367 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:52:05 crc kubenswrapper[4881]: E0121 11:52:05.312394 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:52:07 crc kubenswrapper[4881]: I0121 11:52:07.447083 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 11:52:07 crc kubenswrapper[4881]: I0121 11:52:07.447757 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerName="prometheus" containerID="cri-o://8325ef681bcdbc9f213b1b50d5070cda09f322843e0e7d334a000739ac240fa4" gracePeriod=600 Jan 21 11:52:07 crc kubenswrapper[4881]: I0121 11:52:07.447937 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerName="thanos-sidecar" containerID="cri-o://c140acf6f14058c82c2022005acd28d679f35f983dc5582ed33c0dd219896e01" gracePeriod=600 Jan 21 11:52:07 crc kubenswrapper[4881]: I0121 11:52:07.448004 4881 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerName="config-reloader" containerID="cri-o://ef9d78c9c5e22c01f5e8274cad9637d465377b5339dc20fcbf444a1190841bcb" gracePeriod=600 Jan 21 11:52:07 crc kubenswrapper[4881]: E0121 11:52:07.602934 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5ae3126_d6d3_4268_8e35_e216eabcc6f4.slice/crio-conmon-c140acf6f14058c82c2022005acd28d679f35f983dc5582ed33c0dd219896e01.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5ae3126_d6d3_4268_8e35_e216eabcc6f4.slice/crio-c140acf6f14058c82c2022005acd28d679f35f983dc5582ed33c0dd219896e01.scope\": RecentStats: unable to find data in memory cache]" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.443267 4881 generic.go:334] "Generic (PLEG): container finished" podID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerID="c140acf6f14058c82c2022005acd28d679f35f983dc5582ed33c0dd219896e01" exitCode=0 Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.443753 4881 generic.go:334] "Generic (PLEG): container finished" podID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerID="ef9d78c9c5e22c01f5e8274cad9637d465377b5339dc20fcbf444a1190841bcb" exitCode=0 Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.443763 4881 generic.go:334] "Generic (PLEG): container finished" podID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerID="8325ef681bcdbc9f213b1b50d5070cda09f322843e0e7d334a000739ac240fa4" exitCode=0 Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.444019 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5ae3126-d6d3-4268-8e35-e216eabcc6f4","Type":"ContainerDied","Data":"c140acf6f14058c82c2022005acd28d679f35f983dc5582ed33c0dd219896e01"} Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.444053 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5ae3126-d6d3-4268-8e35-e216eabcc6f4","Type":"ContainerDied","Data":"ef9d78c9c5e22c01f5e8274cad9637d465377b5339dc20fcbf444a1190841bcb"} Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.444064 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5ae3126-d6d3-4268-8e35-e216eabcc6f4","Type":"ContainerDied","Data":"8325ef681bcdbc9f213b1b50d5070cda09f322843e0e7d334a000739ac240fa4"} Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.444075 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5ae3126-d6d3-4268-8e35-e216eabcc6f4","Type":"ContainerDied","Data":"044ed91f90f2699cb0b2df7171e316d9c18fb8084140392d8cb4307802d39a3c"} Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.444084 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="044ed91f90f2699cb0b2df7171e316d9c18fb8084140392d8cb4307802d39a3c" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.518559 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.693760 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-2\") pod \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.693973 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-thanos-prometheus-http-client-file\") pod \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.694004 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9ng7\" (UniqueName: \"kubernetes.io/projected/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-kube-api-access-d9ng7\") pod \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.694030 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-0\") pod \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.694066 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.694315 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "c5ae3126-d6d3-4268-8e35-e216eabcc6f4" (UID: "c5ae3126-d6d3-4268-8e35-e216eabcc6f4"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.694653 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "c5ae3126-d6d3-4268-8e35-e216eabcc6f4" (UID: "c5ae3126-d6d3-4268-8e35-e216eabcc6f4"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.694904 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") pod \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.695021 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-config\") pod \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.695086 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.695129 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-config-out\") pod \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.695163 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-tls-assets\") pod \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.695200 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-1\") pod \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.695222 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config\") pod \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.695252 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-secret-combined-ca-bundle\") pod \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.695676 4881 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.695704 4881 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.698569 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "c5ae3126-d6d3-4268-8e35-e216eabcc6f4" (UID: "c5ae3126-d6d3-4268-8e35-e216eabcc6f4"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.701541 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-kube-api-access-d9ng7" (OuterVolumeSpecName: "kube-api-access-d9ng7") pod "c5ae3126-d6d3-4268-8e35-e216eabcc6f4" (UID: "c5ae3126-d6d3-4268-8e35-e216eabcc6f4"). InnerVolumeSpecName "kube-api-access-d9ng7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.701578 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "c5ae3126-d6d3-4268-8e35-e216eabcc6f4" (UID: "c5ae3126-d6d3-4268-8e35-e216eabcc6f4"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.702088 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "c5ae3126-d6d3-4268-8e35-e216eabcc6f4" (UID: "c5ae3126-d6d3-4268-8e35-e216eabcc6f4"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.703317 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "c5ae3126-d6d3-4268-8e35-e216eabcc6f4" (UID: "c5ae3126-d6d3-4268-8e35-e216eabcc6f4"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.704327 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-config" (OuterVolumeSpecName: "config") pod "c5ae3126-d6d3-4268-8e35-e216eabcc6f4" (UID: "c5ae3126-d6d3-4268-8e35-e216eabcc6f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.705049 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "c5ae3126-d6d3-4268-8e35-e216eabcc6f4" (UID: "c5ae3126-d6d3-4268-8e35-e216eabcc6f4"). 
InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.705704 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "c5ae3126-d6d3-4268-8e35-e216eabcc6f4" (UID: "c5ae3126-d6d3-4268-8e35-e216eabcc6f4"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.723931 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-config-out" (OuterVolumeSpecName: "config-out") pod "c5ae3126-d6d3-4268-8e35-e216eabcc6f4" (UID: "c5ae3126-d6d3-4268-8e35-e216eabcc6f4"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.782760 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "c5ae3126-d6d3-4268-8e35-e216eabcc6f4" (UID: "c5ae3126-d6d3-4268-8e35-e216eabcc6f4"). InnerVolumeSpecName "pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.798306 4881 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.798362 4881 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.798381 4881 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.798403 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9ng7\" (UniqueName: \"kubernetes.io/projected/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-kube-api-access-d9ng7\") on node \"crc\" DevicePath \"\"" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.798423 4881 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.798476 4881 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") on node \"crc\" " Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.798495 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.798514 4881 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.798533 4881 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-config-out\") on node \"crc\" DevicePath \"\"" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.798550 4881 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.822419 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config" (OuterVolumeSpecName: "web-config") pod "c5ae3126-d6d3-4268-8e35-e216eabcc6f4" (UID: "c5ae3126-d6d3-4268-8e35-e216eabcc6f4"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.850859 4881 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.851283 4881 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a") on node "crc" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.900933 4881 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.901200 4881 reconciler_common.go:293] "Volume detached for volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") on node \"crc\" DevicePath \"\"" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.461814 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.496833 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.505457 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.539489 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 11:52:09 crc kubenswrapper[4881]: E0121 11:52:09.541117 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerName="init-config-reloader" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.541239 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerName="init-config-reloader" Jan 21 11:52:09 crc kubenswrapper[4881]: E0121 11:52:09.541342 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerName="prometheus" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.541432 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerName="prometheus" Jan 21 11:52:09 crc kubenswrapper[4881]: E0121 11:52:09.541502 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerName="config-reloader" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.541558 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerName="config-reloader" Jan 21 11:52:09 crc kubenswrapper[4881]: E0121 11:52:09.541648 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerName="thanos-sidecar" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.541837 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerName="thanos-sidecar" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.542186 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerName="prometheus" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.542304 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerName="thanos-sidecar" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.542374 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerName="config-reloader" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.544649 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.547390 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.547434 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.547774 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.547989 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.548360 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.548583 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-jwvdx" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.550940 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.555031 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.569860 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.904228 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.904308 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbc92\" (UniqueName: \"kubernetes.io/projected/4a412b1e-29ac-4420-920d-6054e2c03d53-kube-api-access-nbc92\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.904343 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.904388 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/4a412b1e-29ac-4420-920d-6054e2c03d53-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " 
pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.904435 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4a412b1e-29ac-4420-920d-6054e2c03d53-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.904454 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4a412b1e-29ac-4420-920d-6054e2c03d53-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.904478 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4a412b1e-29ac-4420-920d-6054e2c03d53-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.904495 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/4a412b1e-29ac-4420-920d-6054e2c03d53-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.904514 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.904555 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.904581 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.904626 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-config\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.904651 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.006828 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.006898 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.006956 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-config\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.006993 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.007053 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.007219 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbc92\" (UniqueName: \"kubernetes.io/projected/4a412b1e-29ac-4420-920d-6054e2c03d53-kube-api-access-nbc92\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.007257 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.007315 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/4a412b1e-29ac-4420-920d-6054e2c03d53-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.007372 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4a412b1e-29ac-4420-920d-6054e2c03d53-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.007399 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4a412b1e-29ac-4420-920d-6054e2c03d53-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.007430 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4a412b1e-29ac-4420-920d-6054e2c03d53-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.007451 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/4a412b1e-29ac-4420-920d-6054e2c03d53-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.007475 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.008758 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/4a412b1e-29ac-4420-920d-6054e2c03d53-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.012009 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4a412b1e-29ac-4420-920d-6054e2c03d53-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.012186 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/4a412b1e-29ac-4420-920d-6054e2c03d53-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: 
I0121 11:52:10.015805 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.016853 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.017002 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.019495 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4a412b1e-29ac-4420-920d-6054e2c03d53-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.019509 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-config\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.020114 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4a412b1e-29ac-4420-920d-6054e2c03d53-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.020236 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.028367 4881 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.028400 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.028425 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3c91253029fdcc57c7bcc13c4ee1dc503079fe71761fa62e5d04837e0b8b075e/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.031123 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbc92\" (UniqueName: \"kubernetes.io/projected/4a412b1e-29ac-4420-920d-6054e2c03d53-kube-api-access-nbc92\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.072249 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.163205 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.652120 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 11:52:11 crc kubenswrapper[4881]: I0121 11:52:11.332155 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" path="/var/lib/kubelet/pods/c5ae3126-d6d3-4268-8e35-e216eabcc6f4/volumes" Jan 21 11:52:11 crc kubenswrapper[4881]: I0121 11:52:11.487904 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4a412b1e-29ac-4420-920d-6054e2c03d53","Type":"ContainerStarted","Data":"3a365c1f9c9183115a8cf53d204723967ceea5d6d7c2491eaa0e86e7626daa3d"} Jan 21 11:52:11 crc kubenswrapper[4881]: I0121 11:52:11.511561 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.136:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 11:52:15 crc kubenswrapper[4881]: I0121 11:52:15.539128 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4a412b1e-29ac-4420-920d-6054e2c03d53","Type":"ContainerStarted","Data":"6f125e01fd517390d85ac08a2c5ea9d2899034078c9238efd78e6ffb03996ce4"} Jan 21 11:52:20 crc kubenswrapper[4881]: I0121 11:52:20.311777 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:52:20 crc kubenswrapper[4881]: E0121 11:52:20.312668 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:52:25 crc kubenswrapper[4881]: I0121 11:52:25.656373 4881 generic.go:334] "Generic (PLEG): container finished" podID="4a412b1e-29ac-4420-920d-6054e2c03d53" containerID="6f125e01fd517390d85ac08a2c5ea9d2899034078c9238efd78e6ffb03996ce4" exitCode=0 Jan 21 11:52:25 crc kubenswrapper[4881]: I0121 11:52:25.656523 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4a412b1e-29ac-4420-920d-6054e2c03d53","Type":"ContainerDied","Data":"6f125e01fd517390d85ac08a2c5ea9d2899034078c9238efd78e6ffb03996ce4"} Jan 21 11:52:26 crc kubenswrapper[4881]: I0121 11:52:26.668411 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4a412b1e-29ac-4420-920d-6054e2c03d53","Type":"ContainerStarted","Data":"d69a8d1f17d30ed5c57b5c6613211ee457c89edce7c7ab4c21c2299ff634238c"} Jan 21 11:52:30 crc kubenswrapper[4881]: I0121 11:52:30.716545 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4a412b1e-29ac-4420-920d-6054e2c03d53","Type":"ContainerStarted","Data":"b322f6587d4d6e6ca2aab444b426a0b2cf8db4e66e633a9150fb6848f18052d2"} Jan 21 11:52:30 crc kubenswrapper[4881]: I0121 11:52:30.717250 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"4a412b1e-29ac-4420-920d-6054e2c03d53","Type":"ContainerStarted","Data":"29ab15283cc0a73140495752a9403292f011cad93a7eba66fb212107581801d4"} Jan 21 11:52:30 crc kubenswrapper[4881]: I0121 11:52:30.758163 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=21.758143825 podStartE2EDuration="21.758143825s" podCreationTimestamp="2026-01-21 11:52:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:52:30.75385232 +0000 UTC m=+3338.013808819" watchObservedRunningTime="2026-01-21 11:52:30.758143825 +0000 UTC m=+3338.018100294" Jan 21 11:52:32 crc kubenswrapper[4881]: I0121 11:52:32.216405 4881 scope.go:117] "RemoveContainer" containerID="8325ef681bcdbc9f213b1b50d5070cda09f322843e0e7d334a000739ac240fa4" Jan 21 11:52:32 crc kubenswrapper[4881]: I0121 11:52:32.242893 4881 scope.go:117] "RemoveContainer" containerID="a35359d5b5faf07c0a8496b05737dc67dd3207c714c5cd8b7b98eda3d6b21eb4" Jan 21 11:52:32 crc kubenswrapper[4881]: I0121 11:52:32.272622 4881 scope.go:117] "RemoveContainer" containerID="ef9d78c9c5e22c01f5e8274cad9637d465377b5339dc20fcbf444a1190841bcb" Jan 21 11:52:32 crc kubenswrapper[4881]: I0121 11:52:32.313684 4881 scope.go:117] "RemoveContainer" containerID="c140acf6f14058c82c2022005acd28d679f35f983dc5582ed33c0dd219896e01" Jan 21 11:52:34 crc kubenswrapper[4881]: I0121 11:52:34.311061 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:52:34 crc kubenswrapper[4881]: E0121 11:52:34.311748 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:52:35 crc kubenswrapper[4881]: I0121 11:52:35.163614 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:40 crc kubenswrapper[4881]: I0121 11:52:40.164529 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:40 crc kubenswrapper[4881]: I0121 11:52:40.173070 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:40 crc kubenswrapper[4881]: I0121 11:52:40.237444 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:46 crc kubenswrapper[4881]: I0121 11:52:46.310644 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:52:46 crc kubenswrapper[4881]: E0121 11:52:46.311419 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 
11:52:52.668914 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.671138 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.673342 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.673342 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.673604 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.676007 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-sp5k2" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.685952 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.740488 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.740586 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b482979e-7a9e-4b89-846c-f50400adcf1b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.740632 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b482979e-7a9e-4b89-846c-f50400adcf1b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.740725 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b482979e-7a9e-4b89-846c-f50400adcf1b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.740756 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b482979e-7a9e-4b89-846c-f50400adcf1b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.740855 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b482979e-7a9e-4b89-846c-f50400adcf1b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: 
I0121 11:52:52.740894 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b482979e-7a9e-4b89-846c-f50400adcf1b-config-data\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.740992 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx4hn\" (UniqueName: \"kubernetes.io/projected/b482979e-7a9e-4b89-846c-f50400adcf1b-kube-api-access-nx4hn\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.741063 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b482979e-7a9e-4b89-846c-f50400adcf1b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.843133 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.843198 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b482979e-7a9e-4b89-846c-f50400adcf1b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.843225 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b482979e-7a9e-4b89-846c-f50400adcf1b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.843261 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b482979e-7a9e-4b89-846c-f50400adcf1b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.843276 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b482979e-7a9e-4b89-846c-f50400adcf1b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.843319 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b482979e-7a9e-4b89-846c-f50400adcf1b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.843345 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b482979e-7a9e-4b89-846c-f50400adcf1b-config-data\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.843376 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx4hn\" (UniqueName: \"kubernetes.io/projected/b482979e-7a9e-4b89-846c-f50400adcf1b-kube-api-access-nx4hn\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.843411 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b482979e-7a9e-4b89-846c-f50400adcf1b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.843549 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.843978 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b482979e-7a9e-4b89-846c-f50400adcf1b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.844407 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b482979e-7a9e-4b89-846c-f50400adcf1b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.845311 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b482979e-7a9e-4b89-846c-f50400adcf1b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.845900 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b482979e-7a9e-4b89-846c-f50400adcf1b-config-data\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.851226 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b482979e-7a9e-4b89-846c-f50400adcf1b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.851469 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/b482979e-7a9e-4b89-846c-f50400adcf1b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.855308 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b482979e-7a9e-4b89-846c-f50400adcf1b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.866490 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx4hn\" (UniqueName: \"kubernetes.io/projected/b482979e-7a9e-4b89-846c-f50400adcf1b-kube-api-access-nx4hn\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.892481 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:53 crc kubenswrapper[4881]: I0121 11:52:53.001631 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 21 11:52:53 crc kubenswrapper[4881]: W0121 11:52:53.557648 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb482979e_7a9e_4b89_846c_f50400adcf1b.slice/crio-e7f94caf9fb5ebfb061dd9ba5ac5d3214a56c129294a84a3c16da495e4592e03 WatchSource:0}: Error finding container e7f94caf9fb5ebfb061dd9ba5ac5d3214a56c129294a84a3c16da495e4592e03: Status 404 returned error can't find the container with id e7f94caf9fb5ebfb061dd9ba5ac5d3214a56c129294a84a3c16da495e4592e03 Jan 21 11:52:53 crc kubenswrapper[4881]: I0121 11:52:53.560445 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 21 11:52:53 crc kubenswrapper[4881]: I0121 11:52:53.747888 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b482979e-7a9e-4b89-846c-f50400adcf1b","Type":"ContainerStarted","Data":"e7f94caf9fb5ebfb061dd9ba5ac5d3214a56c129294a84a3c16da495e4592e03"} Jan 21 11:52:57 crc kubenswrapper[4881]: I0121 11:52:57.311452 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:52:57 crc kubenswrapper[4881]: E0121 11:52:57.312293 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:53:03 crc kubenswrapper[4881]: I0121 11:53:03.647741 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 21 11:53:04 crc kubenswrapper[4881]: I0121 11:53:04.861306 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"b482979e-7a9e-4b89-846c-f50400adcf1b","Type":"ContainerStarted","Data":"58f7186a17a8d936929153955c8b6cd57846e64bd7ae7d91ae066bf6fd80cea0"} Jan 21 11:53:08 crc kubenswrapper[4881]: I0121 11:53:08.310744 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:53:08 crc kubenswrapper[4881]: E0121 11:53:08.313071 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:53:23 crc kubenswrapper[4881]: I0121 11:53:23.321713 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:53:23 crc kubenswrapper[4881]: E0121 11:53:23.322764 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:53:37 crc kubenswrapper[4881]: I0121 11:53:37.311382 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:53:38 crc kubenswrapper[4881]: I0121 11:53:38.286716 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"0eb49608bbe8f2a16a73771ce3fd5ae654c9692ec1f4885af786d4be3393b51c"} Jan 21 11:53:38 crc kubenswrapper[4881]: I0121 11:53:38.353093 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=37.269352385 podStartE2EDuration="47.353062306s" podCreationTimestamp="2026-01-21 11:52:51 +0000 UTC" firstStartedPulling="2026-01-21 11:52:53.560954844 +0000 UTC m=+3360.820911343" lastFinishedPulling="2026-01-21 11:53:03.644664795 +0000 UTC m=+3370.904621264" observedRunningTime="2026-01-21 11:53:04.893416217 +0000 UTC m=+3372.153372686" watchObservedRunningTime="2026-01-21 11:53:38.353062306 +0000 UTC m=+3405.613018815" Jan 21 11:55:15 crc kubenswrapper[4881]: I0121 11:55:15.161232 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z67qr"] Jan 21 11:55:15 crc kubenswrapper[4881]: I0121 11:55:15.164952 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:15 crc kubenswrapper[4881]: I0121 11:55:15.176715 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z67qr"] Jan 21 11:55:15 crc kubenswrapper[4881]: I0121 11:55:15.357757 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpvdn\" (UniqueName: \"kubernetes.io/projected/aa68d770-00ce-479d-8638-c321d359f566-kube-api-access-jpvdn\") pod \"certified-operators-z67qr\" (UID: \"aa68d770-00ce-479d-8638-c321d359f566\") " pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:15 crc kubenswrapper[4881]: I0121 11:55:15.358233 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa68d770-00ce-479d-8638-c321d359f566-utilities\") pod \"certified-operators-z67qr\" (UID: \"aa68d770-00ce-479d-8638-c321d359f566\") " pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:15 crc kubenswrapper[4881]: I0121 11:55:15.358316 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa68d770-00ce-479d-8638-c321d359f566-catalog-content\") pod \"certified-operators-z67qr\" (UID: \"aa68d770-00ce-479d-8638-c321d359f566\") " pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:15 crc kubenswrapper[4881]: I0121 11:55:15.461706 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa68d770-00ce-479d-8638-c321d359f566-utilities\") pod \"certified-operators-z67qr\" (UID: \"aa68d770-00ce-479d-8638-c321d359f566\") " pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:15 crc kubenswrapper[4881]: I0121 11:55:15.461817 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa68d770-00ce-479d-8638-c321d359f566-catalog-content\") pod \"certified-operators-z67qr\" (UID: \"aa68d770-00ce-479d-8638-c321d359f566\") " pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:15 crc kubenswrapper[4881]: I0121 11:55:15.462017 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpvdn\" (UniqueName: \"kubernetes.io/projected/aa68d770-00ce-479d-8638-c321d359f566-kube-api-access-jpvdn\") pod \"certified-operators-z67qr\" (UID: \"aa68d770-00ce-479d-8638-c321d359f566\") " pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:15 crc kubenswrapper[4881]: I0121 11:55:15.462186 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa68d770-00ce-479d-8638-c321d359f566-utilities\") pod \"certified-operators-z67qr\" (UID: \"aa68d770-00ce-479d-8638-c321d359f566\") " pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:15 crc kubenswrapper[4881]: I0121 11:55:15.462422 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa68d770-00ce-479d-8638-c321d359f566-catalog-content\") pod \"certified-operators-z67qr\" (UID: \"aa68d770-00ce-479d-8638-c321d359f566\") " pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:15 crc kubenswrapper[4881]: I0121 11:55:15.490456 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jpvdn\" (UniqueName: \"kubernetes.io/projected/aa68d770-00ce-479d-8638-c321d359f566-kube-api-access-jpvdn\") pod \"certified-operators-z67qr\" (UID: \"aa68d770-00ce-479d-8638-c321d359f566\") " pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:15 crc kubenswrapper[4881]: I0121 11:55:15.520427 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:16 crc kubenswrapper[4881]: I0121 11:55:16.085012 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z67qr"] Jan 21 11:55:16 crc kubenswrapper[4881]: I0121 11:55:16.319541 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z67qr" event={"ID":"aa68d770-00ce-479d-8638-c321d359f566","Type":"ContainerStarted","Data":"f1d7702dd9a2ff57d9d3290eecce7abf05b9481260cb8fe006929f80a18e6b6c"} Jan 21 11:55:16 crc kubenswrapper[4881]: I0121 11:55:16.319914 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z67qr" event={"ID":"aa68d770-00ce-479d-8638-c321d359f566","Type":"ContainerStarted","Data":"fd3efa1bda6f47f00e75d651283a9df00f1ada4385af64dc6875164eac5891bf"} Jan 21 11:55:17 crc kubenswrapper[4881]: I0121 11:55:17.329842 4881 generic.go:334] "Generic (PLEG): container finished" podID="aa68d770-00ce-479d-8638-c321d359f566" containerID="f1d7702dd9a2ff57d9d3290eecce7abf05b9481260cb8fe006929f80a18e6b6c" exitCode=0 Jan 21 11:55:17 crc kubenswrapper[4881]: I0121 11:55:17.329959 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z67qr" event={"ID":"aa68d770-00ce-479d-8638-c321d359f566","Type":"ContainerDied","Data":"f1d7702dd9a2ff57d9d3290eecce7abf05b9481260cb8fe006929f80a18e6b6c"} Jan 21 11:55:18 crc kubenswrapper[4881]: I0121 11:55:18.342660 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z67qr" event={"ID":"aa68d770-00ce-479d-8638-c321d359f566","Type":"ContainerStarted","Data":"8c729f58ff3919fa28022a93847e818baade72be2f7e183f0afec92c291a4b2f"} Jan 21 11:55:20 crc kubenswrapper[4881]: I0121 11:55:20.364176 4881 generic.go:334] "Generic (PLEG): container finished" podID="aa68d770-00ce-479d-8638-c321d359f566" containerID="8c729f58ff3919fa28022a93847e818baade72be2f7e183f0afec92c291a4b2f" exitCode=0 Jan 21 11:55:20 crc kubenswrapper[4881]: I0121 11:55:20.364229 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z67qr" event={"ID":"aa68d770-00ce-479d-8638-c321d359f566","Type":"ContainerDied","Data":"8c729f58ff3919fa28022a93847e818baade72be2f7e183f0afec92c291a4b2f"} Jan 21 11:55:21 crc kubenswrapper[4881]: I0121 11:55:21.376995 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z67qr" event={"ID":"aa68d770-00ce-479d-8638-c321d359f566","Type":"ContainerStarted","Data":"b97394ac524b67ebef404bbfb323c8e0d5a49b931e3bd3ad35e70c82c565dea0"} Jan 21 11:55:21 crc kubenswrapper[4881]: I0121 11:55:21.414018 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z67qr" podStartSLOduration=2.937953623 podStartE2EDuration="6.41399635s" podCreationTimestamp="2026-01-21 11:55:15 +0000 UTC" firstStartedPulling="2026-01-21 11:55:17.333066642 +0000 UTC m=+3504.593023111" lastFinishedPulling="2026-01-21 
11:55:20.809109359 +0000 UTC m=+3508.069065838" observedRunningTime="2026-01-21 11:55:21.403044942 +0000 UTC m=+3508.663001421" watchObservedRunningTime="2026-01-21 11:55:21.41399635 +0000 UTC m=+3508.673952819" Jan 21 11:55:25 crc kubenswrapper[4881]: I0121 11:55:25.521339 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:25 crc kubenswrapper[4881]: I0121 11:55:25.523247 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:25 crc kubenswrapper[4881]: I0121 11:55:25.608583 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:26 crc kubenswrapper[4881]: I0121 11:55:26.619903 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:26 crc kubenswrapper[4881]: I0121 11:55:26.670574 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z67qr"] Jan 21 11:55:28 crc kubenswrapper[4881]: I0121 11:55:28.687963 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z67qr" podUID="aa68d770-00ce-479d-8638-c321d359f566" containerName="registry-server" containerID="cri-o://b97394ac524b67ebef404bbfb323c8e0d5a49b931e3bd3ad35e70c82c565dea0" gracePeriod=2 Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.195506 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.291440 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa68d770-00ce-479d-8638-c321d359f566-utilities\") pod \"aa68d770-00ce-479d-8638-c321d359f566\" (UID: \"aa68d770-00ce-479d-8638-c321d359f566\") " Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.291915 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa68d770-00ce-479d-8638-c321d359f566-catalog-content\") pod \"aa68d770-00ce-479d-8638-c321d359f566\" (UID: \"aa68d770-00ce-479d-8638-c321d359f566\") " Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.292149 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpvdn\" (UniqueName: \"kubernetes.io/projected/aa68d770-00ce-479d-8638-c321d359f566-kube-api-access-jpvdn\") pod \"aa68d770-00ce-479d-8638-c321d359f566\" (UID: \"aa68d770-00ce-479d-8638-c321d359f566\") " Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.292610 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa68d770-00ce-479d-8638-c321d359f566-utilities" (OuterVolumeSpecName: "utilities") pod "aa68d770-00ce-479d-8638-c321d359f566" (UID: "aa68d770-00ce-479d-8638-c321d359f566"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.293161 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa68d770-00ce-479d-8638-c321d359f566-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.300714 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa68d770-00ce-479d-8638-c321d359f566-kube-api-access-jpvdn" (OuterVolumeSpecName: "kube-api-access-jpvdn") pod "aa68d770-00ce-479d-8638-c321d359f566" (UID: "aa68d770-00ce-479d-8638-c321d359f566"). InnerVolumeSpecName "kube-api-access-jpvdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.358388 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa68d770-00ce-479d-8638-c321d359f566-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa68d770-00ce-479d-8638-c321d359f566" (UID: "aa68d770-00ce-479d-8638-c321d359f566"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.395395 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpvdn\" (UniqueName: \"kubernetes.io/projected/aa68d770-00ce-479d-8638-c321d359f566-kube-api-access-jpvdn\") on node \"crc\" DevicePath \"\"" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.395690 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa68d770-00ce-479d-8638-c321d359f566-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.707190 4881 generic.go:334] "Generic (PLEG): container finished" podID="aa68d770-00ce-479d-8638-c321d359f566" containerID="b97394ac524b67ebef404bbfb323c8e0d5a49b931e3bd3ad35e70c82c565dea0" exitCode=0 Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.707232 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z67qr" event={"ID":"aa68d770-00ce-479d-8638-c321d359f566","Type":"ContainerDied","Data":"b97394ac524b67ebef404bbfb323c8e0d5a49b931e3bd3ad35e70c82c565dea0"} Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.707267 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z67qr" event={"ID":"aa68d770-00ce-479d-8638-c321d359f566","Type":"ContainerDied","Data":"fd3efa1bda6f47f00e75d651283a9df00f1ada4385af64dc6875164eac5891bf"} Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.707286 4881 scope.go:117] "RemoveContainer" containerID="b97394ac524b67ebef404bbfb323c8e0d5a49b931e3bd3ad35e70c82c565dea0" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.709516 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.765156 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z67qr"] Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.768147 4881 scope.go:117] "RemoveContainer" containerID="8c729f58ff3919fa28022a93847e818baade72be2f7e183f0afec92c291a4b2f" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.780136 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z67qr"] Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.805605 4881 scope.go:117] "RemoveContainer" containerID="f1d7702dd9a2ff57d9d3290eecce7abf05b9481260cb8fe006929f80a18e6b6c" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.848997 4881 scope.go:117] "RemoveContainer" containerID="b97394ac524b67ebef404bbfb323c8e0d5a49b931e3bd3ad35e70c82c565dea0" Jan 21 11:55:29 crc kubenswrapper[4881]: E0121 11:55:29.849666 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b97394ac524b67ebef404bbfb323c8e0d5a49b931e3bd3ad35e70c82c565dea0\": container with ID starting with b97394ac524b67ebef404bbfb323c8e0d5a49b931e3bd3ad35e70c82c565dea0 not found: ID does not exist" containerID="b97394ac524b67ebef404bbfb323c8e0d5a49b931e3bd3ad35e70c82c565dea0" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.849702 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b97394ac524b67ebef404bbfb323c8e0d5a49b931e3bd3ad35e70c82c565dea0"} err="failed to get container status \"b97394ac524b67ebef404bbfb323c8e0d5a49b931e3bd3ad35e70c82c565dea0\": rpc error: code = NotFound desc = could not find container \"b97394ac524b67ebef404bbfb323c8e0d5a49b931e3bd3ad35e70c82c565dea0\": container with ID starting with b97394ac524b67ebef404bbfb323c8e0d5a49b931e3bd3ad35e70c82c565dea0 not found: ID does not exist" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.849728 4881 scope.go:117] "RemoveContainer" containerID="8c729f58ff3919fa28022a93847e818baade72be2f7e183f0afec92c291a4b2f" Jan 21 11:55:29 crc kubenswrapper[4881]: E0121 11:55:29.850079 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c729f58ff3919fa28022a93847e818baade72be2f7e183f0afec92c291a4b2f\": container with ID starting with 8c729f58ff3919fa28022a93847e818baade72be2f7e183f0afec92c291a4b2f not found: ID does not exist" containerID="8c729f58ff3919fa28022a93847e818baade72be2f7e183f0afec92c291a4b2f" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.850105 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c729f58ff3919fa28022a93847e818baade72be2f7e183f0afec92c291a4b2f"} err="failed to get container status \"8c729f58ff3919fa28022a93847e818baade72be2f7e183f0afec92c291a4b2f\": rpc error: code = NotFound desc = could not find container \"8c729f58ff3919fa28022a93847e818baade72be2f7e183f0afec92c291a4b2f\": container with ID starting with 8c729f58ff3919fa28022a93847e818baade72be2f7e183f0afec92c291a4b2f not found: ID does not exist" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.850118 4881 scope.go:117] "RemoveContainer" containerID="f1d7702dd9a2ff57d9d3290eecce7abf05b9481260cb8fe006929f80a18e6b6c" Jan 21 11:55:29 crc kubenswrapper[4881]: E0121 11:55:29.850323 4881 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f1d7702dd9a2ff57d9d3290eecce7abf05b9481260cb8fe006929f80a18e6b6c\": container with ID starting with f1d7702dd9a2ff57d9d3290eecce7abf05b9481260cb8fe006929f80a18e6b6c not found: ID does not exist" containerID="f1d7702dd9a2ff57d9d3290eecce7abf05b9481260cb8fe006929f80a18e6b6c" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.850343 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1d7702dd9a2ff57d9d3290eecce7abf05b9481260cb8fe006929f80a18e6b6c"} err="failed to get container status \"f1d7702dd9a2ff57d9d3290eecce7abf05b9481260cb8fe006929f80a18e6b6c\": rpc error: code = NotFound desc = could not find container \"f1d7702dd9a2ff57d9d3290eecce7abf05b9481260cb8fe006929f80a18e6b6c\": container with ID starting with f1d7702dd9a2ff57d9d3290eecce7abf05b9481260cb8fe006929f80a18e6b6c not found: ID does not exist" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.323702 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa68d770-00ce-479d-8638-c321d359f566" path="/var/lib/kubelet/pods/aa68d770-00ce-479d-8638-c321d359f566/volumes" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.451676 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-djpvn"] Jan 21 11:55:31 crc kubenswrapper[4881]: E0121 11:55:31.452632 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa68d770-00ce-479d-8638-c321d359f566" containerName="extract-content" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.455016 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa68d770-00ce-479d-8638-c321d359f566" containerName="extract-content" Jan 21 11:55:31 crc kubenswrapper[4881]: E0121 11:55:31.455082 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa68d770-00ce-479d-8638-c321d359f566" containerName="registry-server" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.455092 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa68d770-00ce-479d-8638-c321d359f566" containerName="registry-server" Jan 21 11:55:31 crc kubenswrapper[4881]: E0121 11:55:31.455205 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa68d770-00ce-479d-8638-c321d359f566" containerName="extract-utilities" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.455216 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa68d770-00ce-479d-8638-c321d359f566" containerName="extract-utilities" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.455739 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa68d770-00ce-479d-8638-c321d359f566" containerName="registry-server" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.457817 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-djpvn" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.463592 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-djpvn"] Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.518135 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e8058c9-2ffc-461a-98b1-5470103994c8-catalog-content\") pod \"redhat-operators-djpvn\" (UID: \"5e8058c9-2ffc-461a-98b1-5470103994c8\") " pod="openshift-marketplace/redhat-operators-djpvn" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.518727 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvrlm\" (UniqueName: \"kubernetes.io/projected/5e8058c9-2ffc-461a-98b1-5470103994c8-kube-api-access-qvrlm\") pod \"redhat-operators-djpvn\" (UID: \"5e8058c9-2ffc-461a-98b1-5470103994c8\") " pod="openshift-marketplace/redhat-operators-djpvn" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.518949 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e8058c9-2ffc-461a-98b1-5470103994c8-utilities\") pod \"redhat-operators-djpvn\" (UID: \"5e8058c9-2ffc-461a-98b1-5470103994c8\") " pod="openshift-marketplace/redhat-operators-djpvn" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.620712 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvrlm\" (UniqueName: \"kubernetes.io/projected/5e8058c9-2ffc-461a-98b1-5470103994c8-kube-api-access-qvrlm\") pod \"redhat-operators-djpvn\" (UID: \"5e8058c9-2ffc-461a-98b1-5470103994c8\") " pod="openshift-marketplace/redhat-operators-djpvn" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.620780 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e8058c9-2ffc-461a-98b1-5470103994c8-utilities\") pod \"redhat-operators-djpvn\" (UID: \"5e8058c9-2ffc-461a-98b1-5470103994c8\") " pod="openshift-marketplace/redhat-operators-djpvn" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.620878 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e8058c9-2ffc-461a-98b1-5470103994c8-catalog-content\") pod \"redhat-operators-djpvn\" (UID: \"5e8058c9-2ffc-461a-98b1-5470103994c8\") " pod="openshift-marketplace/redhat-operators-djpvn" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.621340 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e8058c9-2ffc-461a-98b1-5470103994c8-utilities\") pod \"redhat-operators-djpvn\" (UID: \"5e8058c9-2ffc-461a-98b1-5470103994c8\") " pod="openshift-marketplace/redhat-operators-djpvn" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.621559 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e8058c9-2ffc-461a-98b1-5470103994c8-catalog-content\") pod \"redhat-operators-djpvn\" (UID: \"5e8058c9-2ffc-461a-98b1-5470103994c8\") " pod="openshift-marketplace/redhat-operators-djpvn" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.644636 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qvrlm\" (UniqueName: \"kubernetes.io/projected/5e8058c9-2ffc-461a-98b1-5470103994c8-kube-api-access-qvrlm\") pod \"redhat-operators-djpvn\" (UID: \"5e8058c9-2ffc-461a-98b1-5470103994c8\") " pod="openshift-marketplace/redhat-operators-djpvn" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.779888 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-djpvn" Jan 21 11:55:32 crc kubenswrapper[4881]: I0121 11:55:32.951432 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-djpvn"] Jan 21 11:55:33 crc kubenswrapper[4881]: I0121 11:55:33.861439 4881 generic.go:334] "Generic (PLEG): container finished" podID="5e8058c9-2ffc-461a-98b1-5470103994c8" containerID="9d5add8e11ad8cf3da511324f8e418d3c25cdf583504d3fb39bc330543acc405" exitCode=0 Jan 21 11:55:33 crc kubenswrapper[4881]: I0121 11:55:33.861506 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-djpvn" event={"ID":"5e8058c9-2ffc-461a-98b1-5470103994c8","Type":"ContainerDied","Data":"9d5add8e11ad8cf3da511324f8e418d3c25cdf583504d3fb39bc330543acc405"} Jan 21 11:55:33 crc kubenswrapper[4881]: I0121 11:55:33.861958 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-djpvn" event={"ID":"5e8058c9-2ffc-461a-98b1-5470103994c8","Type":"ContainerStarted","Data":"b9cab90a2e43a6bc804312c58baa5fbb4516f350e1ebe2508b8e3bbfc2b6d7ef"} Jan 21 11:55:36 crc kubenswrapper[4881]: I0121 11:55:36.581450 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-djpvn" event={"ID":"5e8058c9-2ffc-461a-98b1-5470103994c8","Type":"ContainerStarted","Data":"cbc2183672b4480581de4c466c173749f95cdd4de19823891648de2dbe542235"} Jan 21 11:55:40 crc kubenswrapper[4881]: I0121 11:55:40.627209 4881 generic.go:334] "Generic (PLEG): container finished" podID="5e8058c9-2ffc-461a-98b1-5470103994c8" containerID="cbc2183672b4480581de4c466c173749f95cdd4de19823891648de2dbe542235" exitCode=0 Jan 21 11:55:40 crc kubenswrapper[4881]: I0121 11:55:40.627328 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-djpvn" event={"ID":"5e8058c9-2ffc-461a-98b1-5470103994c8","Type":"ContainerDied","Data":"cbc2183672b4480581de4c466c173749f95cdd4de19823891648de2dbe542235"} Jan 21 11:55:41 crc kubenswrapper[4881]: I0121 11:55:41.638031 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-djpvn" event={"ID":"5e8058c9-2ffc-461a-98b1-5470103994c8","Type":"ContainerStarted","Data":"2e8dccd1701b660e82c89a939087590cf223d1e9a3674853f77e49eb443f2442"} Jan 21 11:55:41 crc kubenswrapper[4881]: I0121 11:55:41.666832 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-djpvn" podStartSLOduration=3.518684745 podStartE2EDuration="10.66680887s" podCreationTimestamp="2026-01-21 11:55:31 +0000 UTC" firstStartedPulling="2026-01-21 11:55:33.863565419 +0000 UTC m=+3521.123521888" lastFinishedPulling="2026-01-21 11:55:41.011689524 +0000 UTC m=+3528.271646013" observedRunningTime="2026-01-21 11:55:41.660523747 +0000 UTC m=+3528.920480236" watchObservedRunningTime="2026-01-21 11:55:41.66680887 +0000 UTC m=+3528.926765349" Jan 21 11:55:41 crc kubenswrapper[4881]: I0121 11:55:41.780659 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-djpvn" Jan 21 
11:55:41 crc kubenswrapper[4881]: I0121 11:55:41.780704 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-djpvn" Jan 21 11:55:42 crc kubenswrapper[4881]: I0121 11:55:42.827505 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-djpvn" podUID="5e8058c9-2ffc-461a-98b1-5470103994c8" containerName="registry-server" probeResult="failure" output=< Jan 21 11:55:42 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 11:55:42 crc kubenswrapper[4881]: > Jan 21 11:55:51 crc kubenswrapper[4881]: I0121 11:55:51.835870 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-djpvn" Jan 21 11:55:51 crc kubenswrapper[4881]: I0121 11:55:51.901565 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-djpvn" Jan 21 11:55:52 crc kubenswrapper[4881]: I0121 11:55:52.084753 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-djpvn"] Jan 21 11:55:52 crc kubenswrapper[4881]: I0121 11:55:52.896142 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-djpvn" podUID="5e8058c9-2ffc-461a-98b1-5470103994c8" containerName="registry-server" containerID="cri-o://2e8dccd1701b660e82c89a939087590cf223d1e9a3674853f77e49eb443f2442" gracePeriod=2 Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.381255 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-djpvn" Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.437627 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvrlm\" (UniqueName: \"kubernetes.io/projected/5e8058c9-2ffc-461a-98b1-5470103994c8-kube-api-access-qvrlm\") pod \"5e8058c9-2ffc-461a-98b1-5470103994c8\" (UID: \"5e8058c9-2ffc-461a-98b1-5470103994c8\") " Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.439774 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e8058c9-2ffc-461a-98b1-5470103994c8-utilities\") pod \"5e8058c9-2ffc-461a-98b1-5470103994c8\" (UID: \"5e8058c9-2ffc-461a-98b1-5470103994c8\") " Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.440059 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e8058c9-2ffc-461a-98b1-5470103994c8-catalog-content\") pod \"5e8058c9-2ffc-461a-98b1-5470103994c8\" (UID: \"5e8058c9-2ffc-461a-98b1-5470103994c8\") " Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.440445 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e8058c9-2ffc-461a-98b1-5470103994c8-utilities" (OuterVolumeSpecName: "utilities") pod "5e8058c9-2ffc-461a-98b1-5470103994c8" (UID: "5e8058c9-2ffc-461a-98b1-5470103994c8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.440946 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e8058c9-2ffc-461a-98b1-5470103994c8-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.452861 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e8058c9-2ffc-461a-98b1-5470103994c8-kube-api-access-qvrlm" (OuterVolumeSpecName: "kube-api-access-qvrlm") pod "5e8058c9-2ffc-461a-98b1-5470103994c8" (UID: "5e8058c9-2ffc-461a-98b1-5470103994c8"). InnerVolumeSpecName "kube-api-access-qvrlm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.542914 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvrlm\" (UniqueName: \"kubernetes.io/projected/5e8058c9-2ffc-461a-98b1-5470103994c8-kube-api-access-qvrlm\") on node \"crc\" DevicePath \"\"" Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.566880 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e8058c9-2ffc-461a-98b1-5470103994c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e8058c9-2ffc-461a-98b1-5470103994c8" (UID: "5e8058c9-2ffc-461a-98b1-5470103994c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.644562 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e8058c9-2ffc-461a-98b1-5470103994c8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.906705 4881 generic.go:334] "Generic (PLEG): container finished" podID="5e8058c9-2ffc-461a-98b1-5470103994c8" containerID="2e8dccd1701b660e82c89a939087590cf223d1e9a3674853f77e49eb443f2442" exitCode=0 Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.906978 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-djpvn" event={"ID":"5e8058c9-2ffc-461a-98b1-5470103994c8","Type":"ContainerDied","Data":"2e8dccd1701b660e82c89a939087590cf223d1e9a3674853f77e49eb443f2442"} Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.907007 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-djpvn" event={"ID":"5e8058c9-2ffc-461a-98b1-5470103994c8","Type":"ContainerDied","Data":"b9cab90a2e43a6bc804312c58baa5fbb4516f350e1ebe2508b8e3bbfc2b6d7ef"} Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.907025 4881 scope.go:117] "RemoveContainer" containerID="2e8dccd1701b660e82c89a939087590cf223d1e9a3674853f77e49eb443f2442" Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.907163 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-djpvn" Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.940042 4881 scope.go:117] "RemoveContainer" containerID="cbc2183672b4480581de4c466c173749f95cdd4de19823891648de2dbe542235" Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.962935 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-djpvn"] Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.969226 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-djpvn"] Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.980613 4881 scope.go:117] "RemoveContainer" containerID="9d5add8e11ad8cf3da511324f8e418d3c25cdf583504d3fb39bc330543acc405" Jan 21 11:55:54 crc kubenswrapper[4881]: I0121 11:55:54.015302 4881 scope.go:117] "RemoveContainer" containerID="2e8dccd1701b660e82c89a939087590cf223d1e9a3674853f77e49eb443f2442" Jan 21 11:55:54 crc kubenswrapper[4881]: E0121 11:55:54.016295 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e8dccd1701b660e82c89a939087590cf223d1e9a3674853f77e49eb443f2442\": container with ID starting with 2e8dccd1701b660e82c89a939087590cf223d1e9a3674853f77e49eb443f2442 not found: ID does not exist" containerID="2e8dccd1701b660e82c89a939087590cf223d1e9a3674853f77e49eb443f2442" Jan 21 11:55:54 crc kubenswrapper[4881]: I0121 11:55:54.016390 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e8dccd1701b660e82c89a939087590cf223d1e9a3674853f77e49eb443f2442"} err="failed to get container status \"2e8dccd1701b660e82c89a939087590cf223d1e9a3674853f77e49eb443f2442\": rpc error: code = NotFound desc = could not find container \"2e8dccd1701b660e82c89a939087590cf223d1e9a3674853f77e49eb443f2442\": container with ID starting with 2e8dccd1701b660e82c89a939087590cf223d1e9a3674853f77e49eb443f2442 not found: ID does not exist" Jan 21 11:55:54 crc kubenswrapper[4881]: I0121 11:55:54.016456 4881 scope.go:117] "RemoveContainer" containerID="cbc2183672b4480581de4c466c173749f95cdd4de19823891648de2dbe542235" Jan 21 11:55:54 crc kubenswrapper[4881]: E0121 11:55:54.017115 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbc2183672b4480581de4c466c173749f95cdd4de19823891648de2dbe542235\": container with ID starting with cbc2183672b4480581de4c466c173749f95cdd4de19823891648de2dbe542235 not found: ID does not exist" containerID="cbc2183672b4480581de4c466c173749f95cdd4de19823891648de2dbe542235" Jan 21 11:55:54 crc kubenswrapper[4881]: I0121 11:55:54.017168 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbc2183672b4480581de4c466c173749f95cdd4de19823891648de2dbe542235"} err="failed to get container status \"cbc2183672b4480581de4c466c173749f95cdd4de19823891648de2dbe542235\": rpc error: code = NotFound desc = could not find container \"cbc2183672b4480581de4c466c173749f95cdd4de19823891648de2dbe542235\": container with ID starting with cbc2183672b4480581de4c466c173749f95cdd4de19823891648de2dbe542235 not found: ID does not exist" Jan 21 11:55:54 crc kubenswrapper[4881]: I0121 11:55:54.017193 4881 scope.go:117] "RemoveContainer" containerID="9d5add8e11ad8cf3da511324f8e418d3c25cdf583504d3fb39bc330543acc405" Jan 21 11:55:54 crc kubenswrapper[4881]: E0121 11:55:54.017741 4881 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"9d5add8e11ad8cf3da511324f8e418d3c25cdf583504d3fb39bc330543acc405\": container with ID starting with 9d5add8e11ad8cf3da511324f8e418d3c25cdf583504d3fb39bc330543acc405 not found: ID does not exist" containerID="9d5add8e11ad8cf3da511324f8e418d3c25cdf583504d3fb39bc330543acc405" Jan 21 11:55:54 crc kubenswrapper[4881]: I0121 11:55:54.017851 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d5add8e11ad8cf3da511324f8e418d3c25cdf583504d3fb39bc330543acc405"} err="failed to get container status \"9d5add8e11ad8cf3da511324f8e418d3c25cdf583504d3fb39bc330543acc405\": rpc error: code = NotFound desc = could not find container \"9d5add8e11ad8cf3da511324f8e418d3c25cdf583504d3fb39bc330543acc405\": container with ID starting with 9d5add8e11ad8cf3da511324f8e418d3c25cdf583504d3fb39bc330543acc405 not found: ID does not exist" Jan 21 11:55:55 crc kubenswrapper[4881]: I0121 11:55:55.448054 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e8058c9-2ffc-461a-98b1-5470103994c8" path="/var/lib/kubelet/pods/5e8058c9-2ffc-461a-98b1-5470103994c8/volumes" Jan 21 11:55:59 crc kubenswrapper[4881]: I0121 11:55:59.851048 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:55:59 crc kubenswrapper[4881]: I0121 11:55:59.851750 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:56:29 crc kubenswrapper[4881]: I0121 11:56:29.851173 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:56:29 crc kubenswrapper[4881]: I0121 11:56:29.851939 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:56:59 crc kubenswrapper[4881]: I0121 11:56:59.851235 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:56:59 crc kubenswrapper[4881]: I0121 11:56:59.851905 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:56:59 crc kubenswrapper[4881]: I0121 11:56:59.851964 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 11:56:59 crc kubenswrapper[4881]: I0121 11:56:59.853123 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0eb49608bbe8f2a16a73771ce3fd5ae654c9692ec1f4885af786d4be3393b51c"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:56:59 crc kubenswrapper[4881]: I0121 11:56:59.853206 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://0eb49608bbe8f2a16a73771ce3fd5ae654c9692ec1f4885af786d4be3393b51c" gracePeriod=600 Jan 21 11:57:00 crc kubenswrapper[4881]: I0121 11:57:00.865471 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="0eb49608bbe8f2a16a73771ce3fd5ae654c9692ec1f4885af786d4be3393b51c" exitCode=0 Jan 21 11:57:00 crc kubenswrapper[4881]: I0121 11:57:00.865553 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"0eb49608bbe8f2a16a73771ce3fd5ae654c9692ec1f4885af786d4be3393b51c"} Jan 21 11:57:00 crc kubenswrapper[4881]: I0121 11:57:00.866145 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9"} Jan 21 11:57:00 crc kubenswrapper[4881]: I0121 11:57:00.866185 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.679007 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wglbm"] Jan 21 11:58:37 crc kubenswrapper[4881]: E0121 11:58:37.680070 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e8058c9-2ffc-461a-98b1-5470103994c8" containerName="extract-content" Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.680088 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e8058c9-2ffc-461a-98b1-5470103994c8" containerName="extract-content" Jan 21 11:58:37 crc kubenswrapper[4881]: E0121 11:58:37.680114 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e8058c9-2ffc-461a-98b1-5470103994c8" containerName="extract-utilities" Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.680122 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e8058c9-2ffc-461a-98b1-5470103994c8" containerName="extract-utilities" Jan 21 11:58:37 crc kubenswrapper[4881]: E0121 11:58:37.680145 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e8058c9-2ffc-461a-98b1-5470103994c8" containerName="registry-server" Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.680153 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e8058c9-2ffc-461a-98b1-5470103994c8" containerName="registry-server" Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.680385 4881 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5e8058c9-2ffc-461a-98b1-5470103994c8" containerName="registry-server" Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.682390 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wglbm" Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.696165 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wglbm"] Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.821803 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fc62569-566f-4a73-b58a-93ea02e351d5-catalog-content\") pod \"redhat-marketplace-wglbm\" (UID: \"0fc62569-566f-4a73-b58a-93ea02e351d5\") " pod="openshift-marketplace/redhat-marketplace-wglbm" Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.821861 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fc62569-566f-4a73-b58a-93ea02e351d5-utilities\") pod \"redhat-marketplace-wglbm\" (UID: \"0fc62569-566f-4a73-b58a-93ea02e351d5\") " pod="openshift-marketplace/redhat-marketplace-wglbm" Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.822122 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grng2\" (UniqueName: \"kubernetes.io/projected/0fc62569-566f-4a73-b58a-93ea02e351d5-kube-api-access-grng2\") pod \"redhat-marketplace-wglbm\" (UID: \"0fc62569-566f-4a73-b58a-93ea02e351d5\") " pod="openshift-marketplace/redhat-marketplace-wglbm" Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.924553 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fc62569-566f-4a73-b58a-93ea02e351d5-catalog-content\") pod \"redhat-marketplace-wglbm\" (UID: \"0fc62569-566f-4a73-b58a-93ea02e351d5\") " pod="openshift-marketplace/redhat-marketplace-wglbm" Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.924600 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fc62569-566f-4a73-b58a-93ea02e351d5-utilities\") pod \"redhat-marketplace-wglbm\" (UID: \"0fc62569-566f-4a73-b58a-93ea02e351d5\") " pod="openshift-marketplace/redhat-marketplace-wglbm" Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.924697 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grng2\" (UniqueName: \"kubernetes.io/projected/0fc62569-566f-4a73-b58a-93ea02e351d5-kube-api-access-grng2\") pod \"redhat-marketplace-wglbm\" (UID: \"0fc62569-566f-4a73-b58a-93ea02e351d5\") " pod="openshift-marketplace/redhat-marketplace-wglbm" Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.925232 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fc62569-566f-4a73-b58a-93ea02e351d5-catalog-content\") pod \"redhat-marketplace-wglbm\" (UID: \"0fc62569-566f-4a73-b58a-93ea02e351d5\") " pod="openshift-marketplace/redhat-marketplace-wglbm" Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.925288 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fc62569-566f-4a73-b58a-93ea02e351d5-utilities\") pod \"redhat-marketplace-wglbm\" (UID: 
\"0fc62569-566f-4a73-b58a-93ea02e351d5\") " pod="openshift-marketplace/redhat-marketplace-wglbm" Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.951876 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grng2\" (UniqueName: \"kubernetes.io/projected/0fc62569-566f-4a73-b58a-93ea02e351d5-kube-api-access-grng2\") pod \"redhat-marketplace-wglbm\" (UID: \"0fc62569-566f-4a73-b58a-93ea02e351d5\") " pod="openshift-marketplace/redhat-marketplace-wglbm" Jan 21 11:58:38 crc kubenswrapper[4881]: I0121 11:58:38.007617 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wglbm" Jan 21 11:58:38 crc kubenswrapper[4881]: I0121 11:58:38.532937 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wglbm"] Jan 21 11:58:38 crc kubenswrapper[4881]: I0121 11:58:38.940221 4881 generic.go:334] "Generic (PLEG): container finished" podID="0fc62569-566f-4a73-b58a-93ea02e351d5" containerID="f4477e3fe85c82b0f8c49b858c6a66049488d37fa120cec5dbcd7a7205111dcb" exitCode=0 Jan 21 11:58:38 crc kubenswrapper[4881]: I0121 11:58:38.940297 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wglbm" event={"ID":"0fc62569-566f-4a73-b58a-93ea02e351d5","Type":"ContainerDied","Data":"f4477e3fe85c82b0f8c49b858c6a66049488d37fa120cec5dbcd7a7205111dcb"} Jan 21 11:58:38 crc kubenswrapper[4881]: I0121 11:58:38.941005 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wglbm" event={"ID":"0fc62569-566f-4a73-b58a-93ea02e351d5","Type":"ContainerStarted","Data":"a1fc02f86769f942a6122e618b7b58b486e44dad90ff27a3391a9c93979aff84"} Jan 21 11:58:38 crc kubenswrapper[4881]: I0121 11:58:38.942757 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 11:58:39 crc kubenswrapper[4881]: I0121 11:58:39.065440 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wnld6"] Jan 21 11:58:39 crc kubenswrapper[4881]: I0121 11:58:39.068577 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wnld6" Jan 21 11:58:39 crc kubenswrapper[4881]: I0121 11:58:39.102851 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wnld6"] Jan 21 11:58:39 crc kubenswrapper[4881]: I0121 11:58:39.167717 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftfl7\" (UniqueName: \"kubernetes.io/projected/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-kube-api-access-ftfl7\") pod \"community-operators-wnld6\" (UID: \"13dde7f6-f493-4ebb-ba1c-2ba924f29e23\") " pod="openshift-marketplace/community-operators-wnld6" Jan 21 11:58:39 crc kubenswrapper[4881]: I0121 11:58:39.167814 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-utilities\") pod \"community-operators-wnld6\" (UID: \"13dde7f6-f493-4ebb-ba1c-2ba924f29e23\") " pod="openshift-marketplace/community-operators-wnld6" Jan 21 11:58:39 crc kubenswrapper[4881]: I0121 11:58:39.168150 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-catalog-content\") pod \"community-operators-wnld6\" (UID: \"13dde7f6-f493-4ebb-ba1c-2ba924f29e23\") " pod="openshift-marketplace/community-operators-wnld6" Jan 21 11:58:39 crc kubenswrapper[4881]: I0121 11:58:39.270270 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-catalog-content\") pod \"community-operators-wnld6\" (UID: \"13dde7f6-f493-4ebb-ba1c-2ba924f29e23\") " pod="openshift-marketplace/community-operators-wnld6" Jan 21 11:58:39 crc kubenswrapper[4881]: I0121 11:58:39.270359 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftfl7\" (UniqueName: \"kubernetes.io/projected/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-kube-api-access-ftfl7\") pod \"community-operators-wnld6\" (UID: \"13dde7f6-f493-4ebb-ba1c-2ba924f29e23\") " pod="openshift-marketplace/community-operators-wnld6" Jan 21 11:58:39 crc kubenswrapper[4881]: I0121 11:58:39.270409 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-utilities\") pod \"community-operators-wnld6\" (UID: \"13dde7f6-f493-4ebb-ba1c-2ba924f29e23\") " pod="openshift-marketplace/community-operators-wnld6" Jan 21 11:58:39 crc kubenswrapper[4881]: I0121 11:58:39.270951 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-catalog-content\") pod \"community-operators-wnld6\" (UID: \"13dde7f6-f493-4ebb-ba1c-2ba924f29e23\") " pod="openshift-marketplace/community-operators-wnld6" Jan 21 11:58:39 crc kubenswrapper[4881]: I0121 11:58:39.271004 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-utilities\") pod \"community-operators-wnld6\" (UID: \"13dde7f6-f493-4ebb-ba1c-2ba924f29e23\") " pod="openshift-marketplace/community-operators-wnld6" Jan 21 11:58:39 crc kubenswrapper[4881]: I0121 11:58:39.291982 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ftfl7\" (UniqueName: \"kubernetes.io/projected/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-kube-api-access-ftfl7\") pod \"community-operators-wnld6\" (UID: \"13dde7f6-f493-4ebb-ba1c-2ba924f29e23\") " pod="openshift-marketplace/community-operators-wnld6" Jan 21 11:58:39 crc kubenswrapper[4881]: I0121 11:58:39.404138 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wnld6" Jan 21 11:58:39 crc kubenswrapper[4881]: I0121 11:58:39.952565 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wglbm" event={"ID":"0fc62569-566f-4a73-b58a-93ea02e351d5","Type":"ContainerStarted","Data":"798f33404a294de27316f7ff8b2766e348ebb156993ec085f9ccb178e417a91a"} Jan 21 11:58:39 crc kubenswrapper[4881]: I0121 11:58:39.958674 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wnld6"] Jan 21 11:58:39 crc kubenswrapper[4881]: W0121 11:58:39.966426 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13dde7f6_f493_4ebb_ba1c_2ba924f29e23.slice/crio-9dfcfa193e7da807aee026d705aa3db51d60e43a718829318060d2e20313e7c6 WatchSource:0}: Error finding container 9dfcfa193e7da807aee026d705aa3db51d60e43a718829318060d2e20313e7c6: Status 404 returned error can't find the container with id 9dfcfa193e7da807aee026d705aa3db51d60e43a718829318060d2e20313e7c6 Jan 21 11:58:40 crc kubenswrapper[4881]: I0121 11:58:40.964352 4881 generic.go:334] "Generic (PLEG): container finished" podID="0fc62569-566f-4a73-b58a-93ea02e351d5" containerID="798f33404a294de27316f7ff8b2766e348ebb156993ec085f9ccb178e417a91a" exitCode=0 Jan 21 11:58:40 crc kubenswrapper[4881]: I0121 11:58:40.964419 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wglbm" event={"ID":"0fc62569-566f-4a73-b58a-93ea02e351d5","Type":"ContainerDied","Data":"798f33404a294de27316f7ff8b2766e348ebb156993ec085f9ccb178e417a91a"} Jan 21 11:58:40 crc kubenswrapper[4881]: I0121 11:58:40.968433 4881 generic.go:334] "Generic (PLEG): container finished" podID="13dde7f6-f493-4ebb-ba1c-2ba924f29e23" containerID="7ec875ee36db270ccd84290368a873a416bf8317eab9b3f2ea99be677c73066a" exitCode=0 Jan 21 11:58:40 crc kubenswrapper[4881]: I0121 11:58:40.968553 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnld6" event={"ID":"13dde7f6-f493-4ebb-ba1c-2ba924f29e23","Type":"ContainerDied","Data":"7ec875ee36db270ccd84290368a873a416bf8317eab9b3f2ea99be677c73066a"} Jan 21 11:58:40 crc kubenswrapper[4881]: I0121 11:58:40.968658 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnld6" event={"ID":"13dde7f6-f493-4ebb-ba1c-2ba924f29e23","Type":"ContainerStarted","Data":"9dfcfa193e7da807aee026d705aa3db51d60e43a718829318060d2e20313e7c6"} Jan 21 11:58:42 crc kubenswrapper[4881]: I0121 11:58:42.043033 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnld6" event={"ID":"13dde7f6-f493-4ebb-ba1c-2ba924f29e23","Type":"ContainerStarted","Data":"2ee68717fbc26f32eb264a09574b499020d832f091cbc0024a98d36e8b74228f"} Jan 21 11:58:42 crc kubenswrapper[4881]: I0121 11:58:42.047924 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wglbm" 
event={"ID":"0fc62569-566f-4a73-b58a-93ea02e351d5","Type":"ContainerStarted","Data":"14b8577e9b7273c011247d4f6cb6d6cad7ed131b5bf5fd6883614bbacb2dce1f"} Jan 21 11:58:42 crc kubenswrapper[4881]: I0121 11:58:42.090387 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wglbm" podStartSLOduration=2.633083076 podStartE2EDuration="5.09036361s" podCreationTimestamp="2026-01-21 11:58:37 +0000 UTC" firstStartedPulling="2026-01-21 11:58:38.942424374 +0000 UTC m=+3706.202380843" lastFinishedPulling="2026-01-21 11:58:41.399704888 +0000 UTC m=+3708.659661377" observedRunningTime="2026-01-21 11:58:42.082641242 +0000 UTC m=+3709.342597711" watchObservedRunningTime="2026-01-21 11:58:42.09036361 +0000 UTC m=+3709.350320079" Jan 21 11:58:44 crc kubenswrapper[4881]: I0121 11:58:44.073587 4881 generic.go:334] "Generic (PLEG): container finished" podID="13dde7f6-f493-4ebb-ba1c-2ba924f29e23" containerID="2ee68717fbc26f32eb264a09574b499020d832f091cbc0024a98d36e8b74228f" exitCode=0 Jan 21 11:58:44 crc kubenswrapper[4881]: I0121 11:58:44.073680 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnld6" event={"ID":"13dde7f6-f493-4ebb-ba1c-2ba924f29e23","Type":"ContainerDied","Data":"2ee68717fbc26f32eb264a09574b499020d832f091cbc0024a98d36e8b74228f"} Jan 21 11:58:45 crc kubenswrapper[4881]: I0121 11:58:45.087755 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnld6" event={"ID":"13dde7f6-f493-4ebb-ba1c-2ba924f29e23","Type":"ContainerStarted","Data":"290a46cd0f3af9ddfb85613c4e7fbe1f03098a1880c638003eec48d10be28765"} Jan 21 11:58:45 crc kubenswrapper[4881]: I0121 11:58:45.112352 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wnld6" podStartSLOduration=2.650694887 podStartE2EDuration="6.112332165s" podCreationTimestamp="2026-01-21 11:58:39 +0000 UTC" firstStartedPulling="2026-01-21 11:58:40.970812618 +0000 UTC m=+3708.230769127" lastFinishedPulling="2026-01-21 11:58:44.432449936 +0000 UTC m=+3711.692406405" observedRunningTime="2026-01-21 11:58:45.107927488 +0000 UTC m=+3712.367883957" watchObservedRunningTime="2026-01-21 11:58:45.112332165 +0000 UTC m=+3712.372288644" Jan 21 11:58:48 crc kubenswrapper[4881]: I0121 11:58:48.008386 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wglbm" Jan 21 11:58:48 crc kubenswrapper[4881]: I0121 11:58:48.009133 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wglbm" Jan 21 11:58:48 crc kubenswrapper[4881]: I0121 11:58:48.060753 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wglbm" Jan 21 11:58:48 crc kubenswrapper[4881]: I0121 11:58:48.188145 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wglbm" Jan 21 11:58:48 crc kubenswrapper[4881]: I0121 11:58:48.655639 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wglbm"] Jan 21 11:58:49 crc kubenswrapper[4881]: I0121 11:58:49.405277 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wnld6" Jan 21 11:58:49 crc kubenswrapper[4881]: I0121 11:58:49.405679 4881 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-wnld6" Jan 21 11:58:49 crc kubenswrapper[4881]: I0121 11:58:49.469262 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wnld6" Jan 21 11:58:50 crc kubenswrapper[4881]: I0121 11:58:50.151356 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wglbm" podUID="0fc62569-566f-4a73-b58a-93ea02e351d5" containerName="registry-server" containerID="cri-o://14b8577e9b7273c011247d4f6cb6d6cad7ed131b5bf5fd6883614bbacb2dce1f" gracePeriod=2 Jan 21 11:58:50 crc kubenswrapper[4881]: I0121 11:58:50.200642 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wnld6" Jan 21 11:58:50 crc kubenswrapper[4881]: I0121 11:58:50.660905 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wglbm" Jan 21 11:58:50 crc kubenswrapper[4881]: I0121 11:58:50.797467 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grng2\" (UniqueName: \"kubernetes.io/projected/0fc62569-566f-4a73-b58a-93ea02e351d5-kube-api-access-grng2\") pod \"0fc62569-566f-4a73-b58a-93ea02e351d5\" (UID: \"0fc62569-566f-4a73-b58a-93ea02e351d5\") " Jan 21 11:58:50 crc kubenswrapper[4881]: I0121 11:58:50.797571 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fc62569-566f-4a73-b58a-93ea02e351d5-catalog-content\") pod \"0fc62569-566f-4a73-b58a-93ea02e351d5\" (UID: \"0fc62569-566f-4a73-b58a-93ea02e351d5\") " Jan 21 11:58:50 crc kubenswrapper[4881]: I0121 11:58:50.797700 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fc62569-566f-4a73-b58a-93ea02e351d5-utilities\") pod \"0fc62569-566f-4a73-b58a-93ea02e351d5\" (UID: \"0fc62569-566f-4a73-b58a-93ea02e351d5\") " Jan 21 11:58:50 crc kubenswrapper[4881]: I0121 11:58:50.799128 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fc62569-566f-4a73-b58a-93ea02e351d5-utilities" (OuterVolumeSpecName: "utilities") pod "0fc62569-566f-4a73-b58a-93ea02e351d5" (UID: "0fc62569-566f-4a73-b58a-93ea02e351d5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:58:50 crc kubenswrapper[4881]: I0121 11:58:50.804614 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fc62569-566f-4a73-b58a-93ea02e351d5-kube-api-access-grng2" (OuterVolumeSpecName: "kube-api-access-grng2") pod "0fc62569-566f-4a73-b58a-93ea02e351d5" (UID: "0fc62569-566f-4a73-b58a-93ea02e351d5"). InnerVolumeSpecName "kube-api-access-grng2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:58:50 crc kubenswrapper[4881]: I0121 11:58:50.821706 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fc62569-566f-4a73-b58a-93ea02e351d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0fc62569-566f-4a73-b58a-93ea02e351d5" (UID: "0fc62569-566f-4a73-b58a-93ea02e351d5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:58:50 crc kubenswrapper[4881]: I0121 11:58:50.901003 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grng2\" (UniqueName: \"kubernetes.io/projected/0fc62569-566f-4a73-b58a-93ea02e351d5-kube-api-access-grng2\") on node \"crc\" DevicePath \"\"" Jan 21 11:58:50 crc kubenswrapper[4881]: I0121 11:58:50.901040 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fc62569-566f-4a73-b58a-93ea02e351d5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:58:50 crc kubenswrapper[4881]: I0121 11:58:50.901049 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fc62569-566f-4a73-b58a-93ea02e351d5-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.162586 4881 generic.go:334] "Generic (PLEG): container finished" podID="0fc62569-566f-4a73-b58a-93ea02e351d5" containerID="14b8577e9b7273c011247d4f6cb6d6cad7ed131b5bf5fd6883614bbacb2dce1f" exitCode=0 Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.162679 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wglbm" event={"ID":"0fc62569-566f-4a73-b58a-93ea02e351d5","Type":"ContainerDied","Data":"14b8577e9b7273c011247d4f6cb6d6cad7ed131b5bf5fd6883614bbacb2dce1f"} Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.162724 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wglbm" event={"ID":"0fc62569-566f-4a73-b58a-93ea02e351d5","Type":"ContainerDied","Data":"a1fc02f86769f942a6122e618b7b58b486e44dad90ff27a3391a9c93979aff84"} Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.162741 4881 scope.go:117] "RemoveContainer" containerID="14b8577e9b7273c011247d4f6cb6d6cad7ed131b5bf5fd6883614bbacb2dce1f" Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.162690 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wglbm" Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.187600 4881 scope.go:117] "RemoveContainer" containerID="798f33404a294de27316f7ff8b2766e348ebb156993ec085f9ccb178e417a91a" Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.214824 4881 scope.go:117] "RemoveContainer" containerID="f4477e3fe85c82b0f8c49b858c6a66049488d37fa120cec5dbcd7a7205111dcb" Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.219298 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wglbm"] Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.227932 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wglbm"] Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.252058 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wnld6"] Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.282615 4881 scope.go:117] "RemoveContainer" containerID="14b8577e9b7273c011247d4f6cb6d6cad7ed131b5bf5fd6883614bbacb2dce1f" Jan 21 11:58:51 crc kubenswrapper[4881]: E0121 11:58:51.283011 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14b8577e9b7273c011247d4f6cb6d6cad7ed131b5bf5fd6883614bbacb2dce1f\": container with ID starting with 14b8577e9b7273c011247d4f6cb6d6cad7ed131b5bf5fd6883614bbacb2dce1f not found: ID does not exist" containerID="14b8577e9b7273c011247d4f6cb6d6cad7ed131b5bf5fd6883614bbacb2dce1f" Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.283049 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14b8577e9b7273c011247d4f6cb6d6cad7ed131b5bf5fd6883614bbacb2dce1f"} err="failed to get container status \"14b8577e9b7273c011247d4f6cb6d6cad7ed131b5bf5fd6883614bbacb2dce1f\": rpc error: code = NotFound desc = could not find container \"14b8577e9b7273c011247d4f6cb6d6cad7ed131b5bf5fd6883614bbacb2dce1f\": container with ID starting with 14b8577e9b7273c011247d4f6cb6d6cad7ed131b5bf5fd6883614bbacb2dce1f not found: ID does not exist" Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.283078 4881 scope.go:117] "RemoveContainer" containerID="798f33404a294de27316f7ff8b2766e348ebb156993ec085f9ccb178e417a91a" Jan 21 11:58:51 crc kubenswrapper[4881]: E0121 11:58:51.283373 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"798f33404a294de27316f7ff8b2766e348ebb156993ec085f9ccb178e417a91a\": container with ID starting with 798f33404a294de27316f7ff8b2766e348ebb156993ec085f9ccb178e417a91a not found: ID does not exist" containerID="798f33404a294de27316f7ff8b2766e348ebb156993ec085f9ccb178e417a91a" Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.283408 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"798f33404a294de27316f7ff8b2766e348ebb156993ec085f9ccb178e417a91a"} err="failed to get container status \"798f33404a294de27316f7ff8b2766e348ebb156993ec085f9ccb178e417a91a\": rpc error: code = NotFound desc = could not find container \"798f33404a294de27316f7ff8b2766e348ebb156993ec085f9ccb178e417a91a\": container with ID starting with 798f33404a294de27316f7ff8b2766e348ebb156993ec085f9ccb178e417a91a not found: ID does not exist" Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.283427 4881 scope.go:117] "RemoveContainer" 
containerID="f4477e3fe85c82b0f8c49b858c6a66049488d37fa120cec5dbcd7a7205111dcb" Jan 21 11:58:51 crc kubenswrapper[4881]: E0121 11:58:51.283752 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4477e3fe85c82b0f8c49b858c6a66049488d37fa120cec5dbcd7a7205111dcb\": container with ID starting with f4477e3fe85c82b0f8c49b858c6a66049488d37fa120cec5dbcd7a7205111dcb not found: ID does not exist" containerID="f4477e3fe85c82b0f8c49b858c6a66049488d37fa120cec5dbcd7a7205111dcb" Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.283829 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4477e3fe85c82b0f8c49b858c6a66049488d37fa120cec5dbcd7a7205111dcb"} err="failed to get container status \"f4477e3fe85c82b0f8c49b858c6a66049488d37fa120cec5dbcd7a7205111dcb\": rpc error: code = NotFound desc = could not find container \"f4477e3fe85c82b0f8c49b858c6a66049488d37fa120cec5dbcd7a7205111dcb\": container with ID starting with f4477e3fe85c82b0f8c49b858c6a66049488d37fa120cec5dbcd7a7205111dcb not found: ID does not exist" Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.327658 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fc62569-566f-4a73-b58a-93ea02e351d5" path="/var/lib/kubelet/pods/0fc62569-566f-4a73-b58a-93ea02e351d5/volumes" Jan 21 11:58:52 crc kubenswrapper[4881]: I0121 11:58:52.175479 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wnld6" podUID="13dde7f6-f493-4ebb-ba1c-2ba924f29e23" containerName="registry-server" containerID="cri-o://290a46cd0f3af9ddfb85613c4e7fbe1f03098a1880c638003eec48d10be28765" gracePeriod=2 Jan 21 11:58:52 crc kubenswrapper[4881]: I0121 11:58:52.654971 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wnld6" Jan 21 11:58:52 crc kubenswrapper[4881]: I0121 11:58:52.846720 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftfl7\" (UniqueName: \"kubernetes.io/projected/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-kube-api-access-ftfl7\") pod \"13dde7f6-f493-4ebb-ba1c-2ba924f29e23\" (UID: \"13dde7f6-f493-4ebb-ba1c-2ba924f29e23\") " Jan 21 11:58:52 crc kubenswrapper[4881]: I0121 11:58:52.846967 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-catalog-content\") pod \"13dde7f6-f493-4ebb-ba1c-2ba924f29e23\" (UID: \"13dde7f6-f493-4ebb-ba1c-2ba924f29e23\") " Jan 21 11:58:52 crc kubenswrapper[4881]: I0121 11:58:52.847087 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-utilities\") pod \"13dde7f6-f493-4ebb-ba1c-2ba924f29e23\" (UID: \"13dde7f6-f493-4ebb-ba1c-2ba924f29e23\") " Jan 21 11:58:52 crc kubenswrapper[4881]: I0121 11:58:52.849956 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-utilities" (OuterVolumeSpecName: "utilities") pod "13dde7f6-f493-4ebb-ba1c-2ba924f29e23" (UID: "13dde7f6-f493-4ebb-ba1c-2ba924f29e23"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:58:52 crc kubenswrapper[4881]: I0121 11:58:52.878966 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-kube-api-access-ftfl7" (OuterVolumeSpecName: "kube-api-access-ftfl7") pod "13dde7f6-f493-4ebb-ba1c-2ba924f29e23" (UID: "13dde7f6-f493-4ebb-ba1c-2ba924f29e23"). InnerVolumeSpecName "kube-api-access-ftfl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:58:52 crc kubenswrapper[4881]: I0121 11:58:52.918111 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "13dde7f6-f493-4ebb-ba1c-2ba924f29e23" (UID: "13dde7f6-f493-4ebb-ba1c-2ba924f29e23"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:58:52 crc kubenswrapper[4881]: I0121 11:58:52.949649 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:58:52 crc kubenswrapper[4881]: I0121 11:58:52.949685 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:58:52 crc kubenswrapper[4881]: I0121 11:58:52.949698 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftfl7\" (UniqueName: \"kubernetes.io/projected/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-kube-api-access-ftfl7\") on node \"crc\" DevicePath \"\"" Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 11:58:53.215296 4881 generic.go:334] "Generic (PLEG): container finished" podID="13dde7f6-f493-4ebb-ba1c-2ba924f29e23" containerID="290a46cd0f3af9ddfb85613c4e7fbe1f03098a1880c638003eec48d10be28765" exitCode=0 Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 11:58:53.215339 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnld6" event={"ID":"13dde7f6-f493-4ebb-ba1c-2ba924f29e23","Type":"ContainerDied","Data":"290a46cd0f3af9ddfb85613c4e7fbe1f03098a1880c638003eec48d10be28765"} Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 11:58:53.215369 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnld6" event={"ID":"13dde7f6-f493-4ebb-ba1c-2ba924f29e23","Type":"ContainerDied","Data":"9dfcfa193e7da807aee026d705aa3db51d60e43a718829318060d2e20313e7c6"} Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 11:58:53.215400 4881 scope.go:117] "RemoveContainer" containerID="290a46cd0f3af9ddfb85613c4e7fbe1f03098a1880c638003eec48d10be28765" Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 11:58:53.215720 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wnld6" Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 11:58:53.239176 4881 scope.go:117] "RemoveContainer" containerID="2ee68717fbc26f32eb264a09574b499020d832f091cbc0024a98d36e8b74228f" Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 11:58:53.266933 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wnld6"] Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 11:58:53.284065 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wnld6"] Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 11:58:53.287574 4881 scope.go:117] "RemoveContainer" containerID="7ec875ee36db270ccd84290368a873a416bf8317eab9b3f2ea99be677c73066a" Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 11:58:53.325258 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13dde7f6-f493-4ebb-ba1c-2ba924f29e23" path="/var/lib/kubelet/pods/13dde7f6-f493-4ebb-ba1c-2ba924f29e23/volumes" Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 11:58:53.336582 4881 scope.go:117] "RemoveContainer" containerID="290a46cd0f3af9ddfb85613c4e7fbe1f03098a1880c638003eec48d10be28765" Jan 21 11:58:53 crc kubenswrapper[4881]: E0121 11:58:53.337236 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"290a46cd0f3af9ddfb85613c4e7fbe1f03098a1880c638003eec48d10be28765\": container with ID starting with 290a46cd0f3af9ddfb85613c4e7fbe1f03098a1880c638003eec48d10be28765 not found: ID does not exist" containerID="290a46cd0f3af9ddfb85613c4e7fbe1f03098a1880c638003eec48d10be28765" Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 11:58:53.337324 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"290a46cd0f3af9ddfb85613c4e7fbe1f03098a1880c638003eec48d10be28765"} err="failed to get container status \"290a46cd0f3af9ddfb85613c4e7fbe1f03098a1880c638003eec48d10be28765\": rpc error: code = NotFound desc = could not find container \"290a46cd0f3af9ddfb85613c4e7fbe1f03098a1880c638003eec48d10be28765\": container with ID starting with 290a46cd0f3af9ddfb85613c4e7fbe1f03098a1880c638003eec48d10be28765 not found: ID does not exist" Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 11:58:53.337362 4881 scope.go:117] "RemoveContainer" containerID="2ee68717fbc26f32eb264a09574b499020d832f091cbc0024a98d36e8b74228f" Jan 21 11:58:53 crc kubenswrapper[4881]: E0121 11:58:53.337979 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ee68717fbc26f32eb264a09574b499020d832f091cbc0024a98d36e8b74228f\": container with ID starting with 2ee68717fbc26f32eb264a09574b499020d832f091cbc0024a98d36e8b74228f not found: ID does not exist" containerID="2ee68717fbc26f32eb264a09574b499020d832f091cbc0024a98d36e8b74228f" Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 11:58:53.338065 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ee68717fbc26f32eb264a09574b499020d832f091cbc0024a98d36e8b74228f"} err="failed to get container status \"2ee68717fbc26f32eb264a09574b499020d832f091cbc0024a98d36e8b74228f\": rpc error: code = NotFound desc = could not find container \"2ee68717fbc26f32eb264a09574b499020d832f091cbc0024a98d36e8b74228f\": container with ID starting with 2ee68717fbc26f32eb264a09574b499020d832f091cbc0024a98d36e8b74228f not found: ID does not exist" Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 
11:58:53.338130 4881 scope.go:117] "RemoveContainer" containerID="7ec875ee36db270ccd84290368a873a416bf8317eab9b3f2ea99be677c73066a" Jan 21 11:58:53 crc kubenswrapper[4881]: E0121 11:58:53.338814 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ec875ee36db270ccd84290368a873a416bf8317eab9b3f2ea99be677c73066a\": container with ID starting with 7ec875ee36db270ccd84290368a873a416bf8317eab9b3f2ea99be677c73066a not found: ID does not exist" containerID="7ec875ee36db270ccd84290368a873a416bf8317eab9b3f2ea99be677c73066a" Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 11:58:53.338847 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ec875ee36db270ccd84290368a873a416bf8317eab9b3f2ea99be677c73066a"} err="failed to get container status \"7ec875ee36db270ccd84290368a873a416bf8317eab9b3f2ea99be677c73066a\": rpc error: code = NotFound desc = could not find container \"7ec875ee36db270ccd84290368a873a416bf8317eab9b3f2ea99be677c73066a\": container with ID starting with 7ec875ee36db270ccd84290368a873a416bf8317eab9b3f2ea99be677c73066a not found: ID does not exist" Jan 21 11:59:29 crc kubenswrapper[4881]: I0121 11:59:29.851131 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:59:29 crc kubenswrapper[4881]: I0121 11:59:29.851781 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:59:59 crc kubenswrapper[4881]: I0121 11:59:59.850716 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:59:59 crc kubenswrapper[4881]: I0121 11:59:59.851376 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.199913 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn"] Jan 21 12:00:00 crc kubenswrapper[4881]: E0121 12:00:00.200456 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13dde7f6-f493-4ebb-ba1c-2ba924f29e23" containerName="extract-content" Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.200481 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="13dde7f6-f493-4ebb-ba1c-2ba924f29e23" containerName="extract-content" Jan 21 12:00:00 crc kubenswrapper[4881]: E0121 12:00:00.200520 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13dde7f6-f493-4ebb-ba1c-2ba924f29e23" containerName="registry-server" Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.200529 4881 
Jan 21 12:00:00 crc kubenswrapper[4881]: E0121 12:00:00.200456 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13dde7f6-f493-4ebb-ba1c-2ba924f29e23" containerName="extract-content"
Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.200481 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="13dde7f6-f493-4ebb-ba1c-2ba924f29e23" containerName="extract-content"
Jan 21 12:00:00 crc kubenswrapper[4881]: E0121 12:00:00.200520 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13dde7f6-f493-4ebb-ba1c-2ba924f29e23" containerName="registry-server"
Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.200529 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="13dde7f6-f493-4ebb-ba1c-2ba924f29e23" containerName="registry-server"
Jan 21 12:00:00 crc kubenswrapper[4881]: E0121 12:00:00.200541 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fc62569-566f-4a73-b58a-93ea02e351d5" containerName="extract-content"
Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.200550 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fc62569-566f-4a73-b58a-93ea02e351d5" containerName="extract-content"
Jan 21 12:00:00 crc kubenswrapper[4881]: E0121 12:00:00.200566 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fc62569-566f-4a73-b58a-93ea02e351d5" containerName="registry-server"
Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.200573 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fc62569-566f-4a73-b58a-93ea02e351d5" containerName="registry-server"
Jan 21 12:00:00 crc kubenswrapper[4881]: E0121 12:00:00.200609 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13dde7f6-f493-4ebb-ba1c-2ba924f29e23" containerName="extract-utilities"
Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.200618 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="13dde7f6-f493-4ebb-ba1c-2ba924f29e23" containerName="extract-utilities"
Jan 21 12:00:00 crc kubenswrapper[4881]: E0121 12:00:00.200634 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fc62569-566f-4a73-b58a-93ea02e351d5" containerName="extract-utilities"
Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.200642 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fc62569-566f-4a73-b58a-93ea02e351d5" containerName="extract-utilities"
Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.200930 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fc62569-566f-4a73-b58a-93ea02e351d5" containerName="registry-server"
Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.200965 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="13dde7f6-f493-4ebb-ba1c-2ba924f29e23" containerName="registry-server"
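The cpu_manager and memory_manager keep per-container resource assignments keyed by pod UID and container name; on the next pod admission they sweep that state and drop entries for pods that no longer exist, which is why the catalog pods deleted at 11:58 are purged here at 12:00. A minimal sketch of that stale-state sweep (a generic map store; the real managers track CPU sets and memory blocks, not strings):

```go
package main

import "fmt"

// Hypothetical per-container assignment store in the spirit of the
// cpu_manager/state_mem lines above: keyed by podUID + containerName,
// purged whenever the pod is no longer active on the node.
type key struct{ podUID, container string }

func removeStaleState(assignments map[key]string, active map[string]bool) {
	for k := range assignments {
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container %q of pod %q\n", k.container, k.podUID)
			delete(assignments, k) // deleting during range is safe in Go
		}
	}
}

func main() {
	assignments := map[key]string{
		{"13dde7f6", "registry-server"}:  "cpus 0-1",
		{"e74d3023", "collect-profiles"}: "cpus 2-3",
	}
	active := map[string]bool{"e74d3023": true} // 13dde7f6 was deleted
	removeStaleState(assignments, active)       // drops only the stale entry
}
```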
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.209975 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.210178 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.211100 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn"] Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.220417 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e74d3023-7ad9-4e65-9627-cc8127927f6b-config-volume\") pod \"collect-profiles-29483280-rl7qn\" (UID: \"e74d3023-7ad9-4e65-9627-cc8127927f6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.220636 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh2lm\" (UniqueName: \"kubernetes.io/projected/e74d3023-7ad9-4e65-9627-cc8127927f6b-kube-api-access-dh2lm\") pod \"collect-profiles-29483280-rl7qn\" (UID: \"e74d3023-7ad9-4e65-9627-cc8127927f6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.220957 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e74d3023-7ad9-4e65-9627-cc8127927f6b-secret-volume\") pod \"collect-profiles-29483280-rl7qn\" (UID: \"e74d3023-7ad9-4e65-9627-cc8127927f6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.323326 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e74d3023-7ad9-4e65-9627-cc8127927f6b-config-volume\") pod \"collect-profiles-29483280-rl7qn\" (UID: \"e74d3023-7ad9-4e65-9627-cc8127927f6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.323405 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh2lm\" (UniqueName: \"kubernetes.io/projected/e74d3023-7ad9-4e65-9627-cc8127927f6b-kube-api-access-dh2lm\") pod \"collect-profiles-29483280-rl7qn\" (UID: \"e74d3023-7ad9-4e65-9627-cc8127927f6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.323549 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e74d3023-7ad9-4e65-9627-cc8127927f6b-secret-volume\") pod \"collect-profiles-29483280-rl7qn\" (UID: \"e74d3023-7ad9-4e65-9627-cc8127927f6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.325032 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e74d3023-7ad9-4e65-9627-cc8127927f6b-config-volume\") pod 
\"collect-profiles-29483280-rl7qn\" (UID: \"e74d3023-7ad9-4e65-9627-cc8127927f6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.332653 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e74d3023-7ad9-4e65-9627-cc8127927f6b-secret-volume\") pod \"collect-profiles-29483280-rl7qn\" (UID: \"e74d3023-7ad9-4e65-9627-cc8127927f6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.340871 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh2lm\" (UniqueName: \"kubernetes.io/projected/e74d3023-7ad9-4e65-9627-cc8127927f6b-kube-api-access-dh2lm\") pod \"collect-profiles-29483280-rl7qn\" (UID: \"e74d3023-7ad9-4e65-9627-cc8127927f6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.530638 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" Jan 21 12:00:01 crc kubenswrapper[4881]: I0121 12:00:01.026632 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn"] Jan 21 12:00:01 crc kubenswrapper[4881]: I0121 12:00:01.975977 4881 generic.go:334] "Generic (PLEG): container finished" podID="e74d3023-7ad9-4e65-9627-cc8127927f6b" containerID="f4fa32143b4e9e742c21ea98ab2bdc72498265c13850a532b1a72e716a34316a" exitCode=0 Jan 21 12:00:01 crc kubenswrapper[4881]: I0121 12:00:01.976371 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" event={"ID":"e74d3023-7ad9-4e65-9627-cc8127927f6b","Type":"ContainerDied","Data":"f4fa32143b4e9e742c21ea98ab2bdc72498265c13850a532b1a72e716a34316a"} Jan 21 12:00:01 crc kubenswrapper[4881]: I0121 12:00:01.976414 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" event={"ID":"e74d3023-7ad9-4e65-9627-cc8127927f6b","Type":"ContainerStarted","Data":"043088683aabf2d418e683c2f01d6f19ffe884d446753df6d19dcbbf4a207932"} Jan 21 12:00:03 crc kubenswrapper[4881]: I0121 12:00:03.385196 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" Jan 21 12:00:03 crc kubenswrapper[4881]: I0121 12:00:03.486920 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e74d3023-7ad9-4e65-9627-cc8127927f6b-secret-volume\") pod \"e74d3023-7ad9-4e65-9627-cc8127927f6b\" (UID: \"e74d3023-7ad9-4e65-9627-cc8127927f6b\") " Jan 21 12:00:03 crc kubenswrapper[4881]: I0121 12:00:03.487089 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e74d3023-7ad9-4e65-9627-cc8127927f6b-config-volume\") pod \"e74d3023-7ad9-4e65-9627-cc8127927f6b\" (UID: \"e74d3023-7ad9-4e65-9627-cc8127927f6b\") " Jan 21 12:00:03 crc kubenswrapper[4881]: I0121 12:00:03.487226 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh2lm\" (UniqueName: \"kubernetes.io/projected/e74d3023-7ad9-4e65-9627-cc8127927f6b-kube-api-access-dh2lm\") pod \"e74d3023-7ad9-4e65-9627-cc8127927f6b\" (UID: \"e74d3023-7ad9-4e65-9627-cc8127927f6b\") " Jan 21 12:00:03 crc kubenswrapper[4881]: I0121 12:00:03.487589 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e74d3023-7ad9-4e65-9627-cc8127927f6b-config-volume" (OuterVolumeSpecName: "config-volume") pod "e74d3023-7ad9-4e65-9627-cc8127927f6b" (UID: "e74d3023-7ad9-4e65-9627-cc8127927f6b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 12:00:03 crc kubenswrapper[4881]: I0121 12:00:03.488243 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e74d3023-7ad9-4e65-9627-cc8127927f6b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 12:00:03 crc kubenswrapper[4881]: I0121 12:00:03.493688 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e74d3023-7ad9-4e65-9627-cc8127927f6b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e74d3023-7ad9-4e65-9627-cc8127927f6b" (UID: "e74d3023-7ad9-4e65-9627-cc8127927f6b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 12:00:03 crc kubenswrapper[4881]: I0121 12:00:03.494695 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e74d3023-7ad9-4e65-9627-cc8127927f6b-kube-api-access-dh2lm" (OuterVolumeSpecName: "kube-api-access-dh2lm") pod "e74d3023-7ad9-4e65-9627-cc8127927f6b" (UID: "e74d3023-7ad9-4e65-9627-cc8127927f6b"). InnerVolumeSpecName "kube-api-access-dh2lm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:00:03 crc kubenswrapper[4881]: I0121 12:00:03.589993 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh2lm\" (UniqueName: \"kubernetes.io/projected/e74d3023-7ad9-4e65-9627-cc8127927f6b-kube-api-access-dh2lm\") on node \"crc\" DevicePath \"\"" Jan 21 12:00:03 crc kubenswrapper[4881]: I0121 12:00:03.590033 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e74d3023-7ad9-4e65-9627-cc8127927f6b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 12:00:03 crc kubenswrapper[4881]: I0121 12:00:03.997544 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" event={"ID":"e74d3023-7ad9-4e65-9627-cc8127927f6b","Type":"ContainerDied","Data":"043088683aabf2d418e683c2f01d6f19ffe884d446753df6d19dcbbf4a207932"} Jan 21 12:00:03 crc kubenswrapper[4881]: I0121 12:00:03.997594 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="043088683aabf2d418e683c2f01d6f19ffe884d446753df6d19dcbbf4a207932" Jan 21 12:00:03 crc kubenswrapper[4881]: I0121 12:00:03.997634 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" Jan 21 12:00:04 crc kubenswrapper[4881]: I0121 12:00:04.471198 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb"] Jan 21 12:00:04 crc kubenswrapper[4881]: I0121 12:00:04.480893 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb"] Jan 21 12:00:05 crc kubenswrapper[4881]: I0121 12:00:05.332278 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c37f0ee6-fcc1-4663-91a3-ab5e47dad851" path="/var/lib/kubelet/pods/c37f0ee6-fcc1-4663-91a3-ab5e47dad851/volumes" Jan 21 12:00:29 crc kubenswrapper[4881]: I0121 12:00:29.851668 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:00:29 crc kubenswrapper[4881]: I0121 12:00:29.852586 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:00:29 crc kubenswrapper[4881]: I0121 12:00:29.852676 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 12:00:29 crc kubenswrapper[4881]: I0121 12:00:29.854153 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 12:00:29 crc kubenswrapper[4881]: I0121 12:00:29.854276 4881 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" gracePeriod=600 Jan 21 12:00:29 crc kubenswrapper[4881]: E0121 12:00:29.986660 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:00:30 crc kubenswrapper[4881]: I0121 12:00:30.257179 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" exitCode=0 Jan 21 12:00:30 crc kubenswrapper[4881]: I0121 12:00:30.257242 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9"} Jan 21 12:00:30 crc kubenswrapper[4881]: I0121 12:00:30.257393 4881 scope.go:117] "RemoveContainer" containerID="0eb49608bbe8f2a16a73771ce3fd5ae654c9692ec1f4885af786d4be3393b51c" Jan 21 12:00:30 crc kubenswrapper[4881]: I0121 12:00:30.259013 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:00:30 crc kubenswrapper[4881]: E0121 12:00:30.259511 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:00:32 crc kubenswrapper[4881]: I0121 12:00:32.599010 4881 scope.go:117] "RemoveContainer" containerID="4ef110f660eb1c97d787ba6c2683b1ded92c0cd6a25a9dac3c9da2e19fd3d06a" Jan 21 12:00:43 crc kubenswrapper[4881]: I0121 12:00:43.318083 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:00:43 crc kubenswrapper[4881]: E0121 12:00:43.319001 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:00:57 crc kubenswrapper[4881]: I0121 12:00:57.310768 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:00:57 crc kubenswrapper[4881]: E0121 12:00:57.311805 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.182483 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29483281-5vf4h"] Jan 21 12:01:00 crc kubenswrapper[4881]: E0121 12:01:00.184310 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e74d3023-7ad9-4e65-9627-cc8127927f6b" containerName="collect-profiles" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.184338 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e74d3023-7ad9-4e65-9627-cc8127927f6b" containerName="collect-profiles" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.184719 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e74d3023-7ad9-4e65-9627-cc8127927f6b" containerName="collect-profiles" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.186242 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.195973 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483281-5vf4h"] Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.280243 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcjvz\" (UniqueName: \"kubernetes.io/projected/d4b92750-a75d-44b9-b0ba-75296371fc59-kube-api-access-pcjvz\") pod \"keystone-cron-29483281-5vf4h\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.280428 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-config-data\") pod \"keystone-cron-29483281-5vf4h\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.280509 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-combined-ca-bundle\") pod \"keystone-cron-29483281-5vf4h\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.280638 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-fernet-keys\") pod \"keystone-cron-29483281-5vf4h\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.382363 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-config-data\") pod \"keystone-cron-29483281-5vf4h\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.382482 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-combined-ca-bundle\") pod \"keystone-cron-29483281-5vf4h\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.382560 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-fernet-keys\") pod \"keystone-cron-29483281-5vf4h\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.382624 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcjvz\" (UniqueName: \"kubernetes.io/projected/d4b92750-a75d-44b9-b0ba-75296371fc59-kube-api-access-pcjvz\") pod \"keystone-cron-29483281-5vf4h\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.392104 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-fernet-keys\") pod \"keystone-cron-29483281-5vf4h\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.392170 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-combined-ca-bundle\") pod \"keystone-cron-29483281-5vf4h\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.392213 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-config-data\") pod \"keystone-cron-29483281-5vf4h\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.407076 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcjvz\" (UniqueName: \"kubernetes.io/projected/d4b92750-a75d-44b9-b0ba-75296371fc59-kube-api-access-pcjvz\") pod \"keystone-cron-29483281-5vf4h\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.505711 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:01 crc kubenswrapper[4881]: I0121 12:01:01.026400 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483281-5vf4h"] Jan 21 12:01:01 crc kubenswrapper[4881]: I0121 12:01:01.724488 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483281-5vf4h" event={"ID":"d4b92750-a75d-44b9-b0ba-75296371fc59","Type":"ContainerStarted","Data":"be33628a74d9a97066f006dffffcfca1b14cc440a7bf9af3ccb2aba1319485a7"} Jan 21 12:01:01 crc kubenswrapper[4881]: I0121 12:01:01.724864 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483281-5vf4h" event={"ID":"d4b92750-a75d-44b9-b0ba-75296371fc59","Type":"ContainerStarted","Data":"1cc079d49d1423ee4e1244a5c9cc50e50531364c616afdcfe5ffebdfd0abd447"} Jan 21 12:01:01 crc kubenswrapper[4881]: I0121 12:01:01.756910 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29483281-5vf4h" podStartSLOduration=1.756885741 podStartE2EDuration="1.756885741s" podCreationTimestamp="2026-01-21 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 12:01:01.746589439 +0000 UTC m=+3849.006545928" watchObservedRunningTime="2026-01-21 12:01:01.756885741 +0000 UTC m=+3849.016842210" Jan 21 12:01:05 crc kubenswrapper[4881]: I0121 12:01:05.768201 4881 generic.go:334] "Generic (PLEG): container finished" podID="d4b92750-a75d-44b9-b0ba-75296371fc59" containerID="be33628a74d9a97066f006dffffcfca1b14cc440a7bf9af3ccb2aba1319485a7" exitCode=0 Jan 21 12:01:05 crc kubenswrapper[4881]: I0121 12:01:05.768273 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483281-5vf4h" event={"ID":"d4b92750-a75d-44b9-b0ba-75296371fc59","Type":"ContainerDied","Data":"be33628a74d9a97066f006dffffcfca1b14cc440a7bf9af3ccb2aba1319485a7"} Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.213324 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.245727 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-fernet-keys\") pod \"d4b92750-a75d-44b9-b0ba-75296371fc59\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.246004 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-config-data\") pod \"d4b92750-a75d-44b9-b0ba-75296371fc59\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.246043 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcjvz\" (UniqueName: \"kubernetes.io/projected/d4b92750-a75d-44b9-b0ba-75296371fc59-kube-api-access-pcjvz\") pod \"d4b92750-a75d-44b9-b0ba-75296371fc59\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.246137 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-combined-ca-bundle\") pod \"d4b92750-a75d-44b9-b0ba-75296371fc59\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.265160 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4b92750-a75d-44b9-b0ba-75296371fc59-kube-api-access-pcjvz" (OuterVolumeSpecName: "kube-api-access-pcjvz") pod "d4b92750-a75d-44b9-b0ba-75296371fc59" (UID: "d4b92750-a75d-44b9-b0ba-75296371fc59"). InnerVolumeSpecName "kube-api-access-pcjvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.275639 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d4b92750-a75d-44b9-b0ba-75296371fc59" (UID: "d4b92750-a75d-44b9-b0ba-75296371fc59"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.287174 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4b92750-a75d-44b9-b0ba-75296371fc59" (UID: "d4b92750-a75d-44b9-b0ba-75296371fc59"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.319413 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-config-data" (OuterVolumeSpecName: "config-data") pod "d4b92750-a75d-44b9-b0ba-75296371fc59" (UID: "d4b92750-a75d-44b9-b0ba-75296371fc59"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.351147 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.351218 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcjvz\" (UniqueName: \"kubernetes.io/projected/d4b92750-a75d-44b9-b0ba-75296371fc59-kube-api-access-pcjvz\") on node \"crc\" DevicePath \"\"" Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.351252 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.351284 4881 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.790513 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483281-5vf4h" event={"ID":"d4b92750-a75d-44b9-b0ba-75296371fc59","Type":"ContainerDied","Data":"1cc079d49d1423ee4e1244a5c9cc50e50531364c616afdcfe5ffebdfd0abd447"} Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.790577 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cc079d49d1423ee4e1244a5c9cc50e50531364c616afdcfe5ffebdfd0abd447" Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.790585 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:11 crc kubenswrapper[4881]: I0121 12:01:11.310544 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:01:11 crc kubenswrapper[4881]: E0121 12:01:11.311314 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:01:25 crc kubenswrapper[4881]: I0121 12:01:25.315382 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:01:25 crc kubenswrapper[4881]: E0121 12:01:25.316634 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:01:37 crc kubenswrapper[4881]: I0121 12:01:37.311238 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:01:37 crc kubenswrapper[4881]: E0121 12:01:37.312065 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:01:52 crc kubenswrapper[4881]: I0121 12:01:52.310567 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:01:52 crc kubenswrapper[4881]: E0121 12:01:52.311428 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:02:06 crc kubenswrapper[4881]: I0121 12:02:06.312191 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:02:06 crc kubenswrapper[4881]: E0121 12:02:06.313271 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:02:18 crc kubenswrapper[4881]: I0121 12:02:18.311174 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:02:18 crc kubenswrapper[4881]: E0121 12:02:18.311960 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:02:33 crc kubenswrapper[4881]: I0121 12:02:33.317107 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:02:33 crc kubenswrapper[4881]: E0121 12:02:33.317632 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:02:47 crc kubenswrapper[4881]: I0121 12:02:47.311114 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:02:47 crc kubenswrapper[4881]: E0121 12:02:47.311928 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:03:00 crc kubenswrapper[4881]: I0121 12:03:00.311532 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:03:00 crc kubenswrapper[4881]: E0121 12:03:00.312360 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:03:12 crc kubenswrapper[4881]: I0121 12:03:12.311335 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:03:12 crc kubenswrapper[4881]: E0121 12:03:12.312313 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:03:23 crc kubenswrapper[4881]: I0121 12:03:23.317868 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:03:23 crc kubenswrapper[4881]: E0121 12:03:23.320483 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:03:34 crc kubenswrapper[4881]: I0121 12:03:34.311877 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:03:34 crc kubenswrapper[4881]: E0121 12:03:34.312904 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:03:47 crc kubenswrapper[4881]: I0121 12:03:47.310285 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:03:47 crc kubenswrapper[4881]: E0121 12:03:47.312257 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" 
podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:03:58 crc kubenswrapper[4881]: I0121 12:03:58.311683 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:03:58 crc kubenswrapper[4881]: E0121 12:03:58.312694 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:04:11 crc kubenswrapper[4881]: I0121 12:04:11.311945 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:04:11 crc kubenswrapper[4881]: E0121 12:04:11.313043 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:04:25 crc kubenswrapper[4881]: I0121 12:04:25.310979 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:04:25 crc kubenswrapper[4881]: E0121 12:04:25.312042 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:04:36 crc kubenswrapper[4881]: I0121 12:04:36.311230 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:04:36 crc kubenswrapper[4881]: E0121 12:04:36.313971 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:04:50 crc kubenswrapper[4881]: I0121 12:04:50.311021 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:04:50 crc kubenswrapper[4881]: E0121 12:04:50.311750 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:05:04 crc kubenswrapper[4881]: I0121 12:05:04.311976 4881 scope.go:117] "RemoveContainer" 
containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:05:04 crc kubenswrapper[4881]: E0121 12:05:04.312951 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:05:16 crc kubenswrapper[4881]: I0121 12:05:16.312006 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:05:16 crc kubenswrapper[4881]: E0121 12:05:16.313336 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:05:27 crc kubenswrapper[4881]: I0121 12:05:27.311176 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:05:27 crc kubenswrapper[4881]: E0121 12:05:27.311852 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:05:42 crc kubenswrapper[4881]: I0121 12:05:42.310896 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:05:43 crc kubenswrapper[4881]: I0121 12:05:43.060303 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"8fa2fcd197247817c68b133d6a51bf7eca2545a597f5deb7e87467827e522318"} Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.076214 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qnhh2"] Jan 21 12:06:04 crc kubenswrapper[4881]: E0121 12:06:04.077381 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b92750-a75d-44b9-b0ba-75296371fc59" containerName="keystone-cron" Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.077396 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b92750-a75d-44b9-b0ba-75296371fc59" containerName="keystone-cron" Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.077623 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4b92750-a75d-44b9-b0ba-75296371fc59" containerName="keystone-cron" Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.079392 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.082379 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b9e1b23-382c-4857-9ffa-0106af9afaa8-catalog-content\") pod \"redhat-operators-qnhh2\" (UID: \"7b9e1b23-382c-4857-9ffa-0106af9afaa8\") " pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.082682 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpsmq\" (UniqueName: \"kubernetes.io/projected/7b9e1b23-382c-4857-9ffa-0106af9afaa8-kube-api-access-qpsmq\") pod \"redhat-operators-qnhh2\" (UID: \"7b9e1b23-382c-4857-9ffa-0106af9afaa8\") " pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.082734 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b9e1b23-382c-4857-9ffa-0106af9afaa8-utilities\") pod \"redhat-operators-qnhh2\" (UID: \"7b9e1b23-382c-4857-9ffa-0106af9afaa8\") " pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.094383 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qnhh2"] Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.185551 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b9e1b23-382c-4857-9ffa-0106af9afaa8-catalog-content\") pod \"redhat-operators-qnhh2\" (UID: \"7b9e1b23-382c-4857-9ffa-0106af9afaa8\") " pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.185734 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpsmq\" (UniqueName: \"kubernetes.io/projected/7b9e1b23-382c-4857-9ffa-0106af9afaa8-kube-api-access-qpsmq\") pod \"redhat-operators-qnhh2\" (UID: \"7b9e1b23-382c-4857-9ffa-0106af9afaa8\") " pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.185768 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b9e1b23-382c-4857-9ffa-0106af9afaa8-utilities\") pod \"redhat-operators-qnhh2\" (UID: \"7b9e1b23-382c-4857-9ffa-0106af9afaa8\") " pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.186328 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b9e1b23-382c-4857-9ffa-0106af9afaa8-catalog-content\") pod \"redhat-operators-qnhh2\" (UID: \"7b9e1b23-382c-4857-9ffa-0106af9afaa8\") " pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.186462 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b9e1b23-382c-4857-9ffa-0106af9afaa8-utilities\") pod \"redhat-operators-qnhh2\" (UID: \"7b9e1b23-382c-4857-9ffa-0106af9afaa8\") " pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.211936 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qpsmq\" (UniqueName: \"kubernetes.io/projected/7b9e1b23-382c-4857-9ffa-0106af9afaa8-kube-api-access-qpsmq\") pod \"redhat-operators-qnhh2\" (UID: \"7b9e1b23-382c-4857-9ffa-0106af9afaa8\") " pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.410023 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.922883 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qnhh2"] Jan 21 12:06:04 crc kubenswrapper[4881]: W0121 12:06:04.926062 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b9e1b23_382c_4857_9ffa_0106af9afaa8.slice/crio-25513ebd94ad748a797e6b5332f9cbb867e4bc462face6f0fc3b7ed4e0ed1504 WatchSource:0}: Error finding container 25513ebd94ad748a797e6b5332f9cbb867e4bc462face6f0fc3b7ed4e0ed1504: Status 404 returned error can't find the container with id 25513ebd94ad748a797e6b5332f9cbb867e4bc462face6f0fc3b7ed4e0ed1504 Jan 21 12:06:05 crc kubenswrapper[4881]: I0121 12:06:05.616653 4881 generic.go:334] "Generic (PLEG): container finished" podID="7b9e1b23-382c-4857-9ffa-0106af9afaa8" containerID="309f09412bab91b28cad03a81dc1b53676d2d0eaa20c5596ad91194b47204b65" exitCode=0 Jan 21 12:06:05 crc kubenswrapper[4881]: I0121 12:06:05.616882 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnhh2" event={"ID":"7b9e1b23-382c-4857-9ffa-0106af9afaa8","Type":"ContainerDied","Data":"309f09412bab91b28cad03a81dc1b53676d2d0eaa20c5596ad91194b47204b65"} Jan 21 12:06:05 crc kubenswrapper[4881]: I0121 12:06:05.616909 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnhh2" event={"ID":"7b9e1b23-382c-4857-9ffa-0106af9afaa8","Type":"ContainerStarted","Data":"25513ebd94ad748a797e6b5332f9cbb867e4bc462face6f0fc3b7ed4e0ed1504"} Jan 21 12:06:05 crc kubenswrapper[4881]: I0121 12:06:05.619394 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 12:06:06 crc kubenswrapper[4881]: I0121 12:06:06.629875 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnhh2" event={"ID":"7b9e1b23-382c-4857-9ffa-0106af9afaa8","Type":"ContainerStarted","Data":"fd13d11261e10d806b8fc9a31e08126bcb2b5dabf46be5b7eb671c2157e7d1db"} Jan 21 12:06:10 crc kubenswrapper[4881]: I0121 12:06:10.678123 4881 generic.go:334] "Generic (PLEG): container finished" podID="7b9e1b23-382c-4857-9ffa-0106af9afaa8" containerID="fd13d11261e10d806b8fc9a31e08126bcb2b5dabf46be5b7eb671c2157e7d1db" exitCode=0 Jan 21 12:06:10 crc kubenswrapper[4881]: I0121 12:06:10.678271 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnhh2" event={"ID":"7b9e1b23-382c-4857-9ffa-0106af9afaa8","Type":"ContainerDied","Data":"fd13d11261e10d806b8fc9a31e08126bcb2b5dabf46be5b7eb671c2157e7d1db"} Jan 21 12:06:11 crc kubenswrapper[4881]: I0121 12:06:11.691799 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnhh2" event={"ID":"7b9e1b23-382c-4857-9ffa-0106af9afaa8","Type":"ContainerStarted","Data":"2bf2fad5d13f5e5ff97b6e448324df0e93f3fdddee103d48c5ee75aba4b2dd1b"} Jan 21 12:06:11 crc kubenswrapper[4881]: I0121 12:06:11.719180 4881 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/redhat-operators-qnhh2" podStartSLOduration=2.283024034 podStartE2EDuration="7.71913413s" podCreationTimestamp="2026-01-21 12:06:04 +0000 UTC" firstStartedPulling="2026-01-21 12:06:05.619135328 +0000 UTC m=+4152.879091797" lastFinishedPulling="2026-01-21 12:06:11.055245384 +0000 UTC m=+4158.315201893" observedRunningTime="2026-01-21 12:06:11.707037309 +0000 UTC m=+4158.966993788" watchObservedRunningTime="2026-01-21 12:06:11.71913413 +0000 UTC m=+4158.979090599" Jan 21 12:06:14 crc kubenswrapper[4881]: I0121 12:06:14.411566 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:14 crc kubenswrapper[4881]: I0121 12:06:14.412088 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:15 crc kubenswrapper[4881]: I0121 12:06:15.501859 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qnhh2" podUID="7b9e1b23-382c-4857-9ffa-0106af9afaa8" containerName="registry-server" probeResult="failure" output=< Jan 21 12:06:15 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 12:06:15 crc kubenswrapper[4881]: > Jan 21 12:06:24 crc kubenswrapper[4881]: I0121 12:06:24.468727 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:24 crc kubenswrapper[4881]: I0121 12:06:24.528224 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:24 crc kubenswrapper[4881]: I0121 12:06:24.714572 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qnhh2"] Jan 21 12:06:25 crc kubenswrapper[4881]: I0121 12:06:25.840553 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qnhh2" podUID="7b9e1b23-382c-4857-9ffa-0106af9afaa8" containerName="registry-server" containerID="cri-o://2bf2fad5d13f5e5ff97b6e448324df0e93f3fdddee103d48c5ee75aba4b2dd1b" gracePeriod=2 Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.340715 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.449747 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b9e1b23-382c-4857-9ffa-0106af9afaa8-catalog-content\") pod \"7b9e1b23-382c-4857-9ffa-0106af9afaa8\" (UID: \"7b9e1b23-382c-4857-9ffa-0106af9afaa8\") " Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.449849 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpsmq\" (UniqueName: \"kubernetes.io/projected/7b9e1b23-382c-4857-9ffa-0106af9afaa8-kube-api-access-qpsmq\") pod \"7b9e1b23-382c-4857-9ffa-0106af9afaa8\" (UID: \"7b9e1b23-382c-4857-9ffa-0106af9afaa8\") " Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.449879 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b9e1b23-382c-4857-9ffa-0106af9afaa8-utilities\") pod \"7b9e1b23-382c-4857-9ffa-0106af9afaa8\" (UID: \"7b9e1b23-382c-4857-9ffa-0106af9afaa8\") " Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.451335 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b9e1b23-382c-4857-9ffa-0106af9afaa8-utilities" (OuterVolumeSpecName: "utilities") pod "7b9e1b23-382c-4857-9ffa-0106af9afaa8" (UID: "7b9e1b23-382c-4857-9ffa-0106af9afaa8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.459880 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b9e1b23-382c-4857-9ffa-0106af9afaa8-kube-api-access-qpsmq" (OuterVolumeSpecName: "kube-api-access-qpsmq") pod "7b9e1b23-382c-4857-9ffa-0106af9afaa8" (UID: "7b9e1b23-382c-4857-9ffa-0106af9afaa8"). InnerVolumeSpecName "kube-api-access-qpsmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.556394 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpsmq\" (UniqueName: \"kubernetes.io/projected/7b9e1b23-382c-4857-9ffa-0106af9afaa8-kube-api-access-qpsmq\") on node \"crc\" DevicePath \"\"" Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.556691 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b9e1b23-382c-4857-9ffa-0106af9afaa8-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.628167 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b9e1b23-382c-4857-9ffa-0106af9afaa8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b9e1b23-382c-4857-9ffa-0106af9afaa8" (UID: "7b9e1b23-382c-4857-9ffa-0106af9afaa8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.658811 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b9e1b23-382c-4857-9ffa-0106af9afaa8-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.852141 4881 generic.go:334] "Generic (PLEG): container finished" podID="7b9e1b23-382c-4857-9ffa-0106af9afaa8" containerID="2bf2fad5d13f5e5ff97b6e448324df0e93f3fdddee103d48c5ee75aba4b2dd1b" exitCode=0
Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.852193 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnhh2" event={"ID":"7b9e1b23-382c-4857-9ffa-0106af9afaa8","Type":"ContainerDied","Data":"2bf2fad5d13f5e5ff97b6e448324df0e93f3fdddee103d48c5ee75aba4b2dd1b"}
Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.852240 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnhh2" event={"ID":"7b9e1b23-382c-4857-9ffa-0106af9afaa8","Type":"ContainerDied","Data":"25513ebd94ad748a797e6b5332f9cbb867e4bc462face6f0fc3b7ed4e0ed1504"}
Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.852268 4881 scope.go:117] "RemoveContainer" containerID="2bf2fad5d13f5e5ff97b6e448324df0e93f3fdddee103d48c5ee75aba4b2dd1b"
Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.853017 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qnhh2"
Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.891652 4881 scope.go:117] "RemoveContainer" containerID="fd13d11261e10d806b8fc9a31e08126bcb2b5dabf46be5b7eb671c2157e7d1db"
Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.893082 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qnhh2"]
Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.909148 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qnhh2"]
Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.925425 4881 scope.go:117] "RemoveContainer" containerID="309f09412bab91b28cad03a81dc1b53676d2d0eaa20c5596ad91194b47204b65"
Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.968734 4881 scope.go:117] "RemoveContainer" containerID="2bf2fad5d13f5e5ff97b6e448324df0e93f3fdddee103d48c5ee75aba4b2dd1b"
Jan 21 12:06:26 crc kubenswrapper[4881]: E0121 12:06:26.969418 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bf2fad5d13f5e5ff97b6e448324df0e93f3fdddee103d48c5ee75aba4b2dd1b\": container with ID starting with 2bf2fad5d13f5e5ff97b6e448324df0e93f3fdddee103d48c5ee75aba4b2dd1b not found: ID does not exist" containerID="2bf2fad5d13f5e5ff97b6e448324df0e93f3fdddee103d48c5ee75aba4b2dd1b"
Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.969542 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bf2fad5d13f5e5ff97b6e448324df0e93f3fdddee103d48c5ee75aba4b2dd1b"} err="failed to get container status \"2bf2fad5d13f5e5ff97b6e448324df0e93f3fdddee103d48c5ee75aba4b2dd1b\": rpc error: code = NotFound desc = could not find container \"2bf2fad5d13f5e5ff97b6e448324df0e93f3fdddee103d48c5ee75aba4b2dd1b\": container with ID starting with 2bf2fad5d13f5e5ff97b6e448324df0e93f3fdddee103d48c5ee75aba4b2dd1b not found: ID does not exist"
Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.969644 4881 scope.go:117] "RemoveContainer" containerID="fd13d11261e10d806b8fc9a31e08126bcb2b5dabf46be5b7eb671c2157e7d1db"
Jan 21 12:06:26 crc kubenswrapper[4881]: E0121 12:06:26.970182 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd13d11261e10d806b8fc9a31e08126bcb2b5dabf46be5b7eb671c2157e7d1db\": container with ID starting with fd13d11261e10d806b8fc9a31e08126bcb2b5dabf46be5b7eb671c2157e7d1db not found: ID does not exist" containerID="fd13d11261e10d806b8fc9a31e08126bcb2b5dabf46be5b7eb671c2157e7d1db"
Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.970214 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd13d11261e10d806b8fc9a31e08126bcb2b5dabf46be5b7eb671c2157e7d1db"} err="failed to get container status \"fd13d11261e10d806b8fc9a31e08126bcb2b5dabf46be5b7eb671c2157e7d1db\": rpc error: code = NotFound desc = could not find container \"fd13d11261e10d806b8fc9a31e08126bcb2b5dabf46be5b7eb671c2157e7d1db\": container with ID starting with fd13d11261e10d806b8fc9a31e08126bcb2b5dabf46be5b7eb671c2157e7d1db not found: ID does not exist"
Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.970237 4881 scope.go:117] "RemoveContainer" containerID="309f09412bab91b28cad03a81dc1b53676d2d0eaa20c5596ad91194b47204b65"
Jan 21 12:06:26 crc kubenswrapper[4881]: E0121 12:06:26.970573 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"309f09412bab91b28cad03a81dc1b53676d2d0eaa20c5596ad91194b47204b65\": container with ID starting with 309f09412bab91b28cad03a81dc1b53676d2d0eaa20c5596ad91194b47204b65 not found: ID does not exist" containerID="309f09412bab91b28cad03a81dc1b53676d2d0eaa20c5596ad91194b47204b65"
Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.970674 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"309f09412bab91b28cad03a81dc1b53676d2d0eaa20c5596ad91194b47204b65"} err="failed to get container status \"309f09412bab91b28cad03a81dc1b53676d2d0eaa20c5596ad91194b47204b65\": rpc error: code = NotFound desc = could not find container \"309f09412bab91b28cad03a81dc1b53676d2d0eaa20c5596ad91194b47204b65\": container with ID starting with 309f09412bab91b28cad03a81dc1b53676d2d0eaa20c5596ad91194b47204b65 not found: ID does not exist"
Jan 21 12:06:27 crc kubenswrapper[4881]: I0121 12:06:27.322410 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b9e1b23-382c-4857-9ffa-0106af9afaa8" path="/var/lib/kubelet/pods/7b9e1b23-382c-4857-9ffa-0106af9afaa8/volumes"
Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.447057 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j4cbb"]
Jan 21 12:06:38 crc kubenswrapper[4881]: E0121 12:06:38.448121 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b9e1b23-382c-4857-9ffa-0106af9afaa8" containerName="extract-utilities"
Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.448142 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b9e1b23-382c-4857-9ffa-0106af9afaa8" containerName="extract-utilities"
Jan 21 12:06:38 crc kubenswrapper[4881]: E0121 12:06:38.448206 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b9e1b23-382c-4857-9ffa-0106af9afaa8" containerName="registry-server"
Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.448219 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b9e1b23-382c-4857-9ffa-0106af9afaa8" containerName="registry-server"
Jan 21 12:06:38 crc kubenswrapper[4881]: E0121 12:06:38.448257 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b9e1b23-382c-4857-9ffa-0106af9afaa8" containerName="extract-content"
Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.448293 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b9e1b23-382c-4857-9ffa-0106af9afaa8" containerName="extract-content"
Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.448846 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b9e1b23-382c-4857-9ffa-0106af9afaa8" containerName="registry-server"
Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.451953 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j4cbb"
Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.468732 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j4cbb"]
Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.636232 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-catalog-content\") pod \"certified-operators-j4cbb\" (UID: \"5462fac8-b03c-48c0-bc3d-b1a1b1285cab\") " pod="openshift-marketplace/certified-operators-j4cbb"
Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.636473 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-utilities\") pod \"certified-operators-j4cbb\" (UID: \"5462fac8-b03c-48c0-bc3d-b1a1b1285cab\") " pod="openshift-marketplace/certified-operators-j4cbb"
Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.636624 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v77g4\" (UniqueName: \"kubernetes.io/projected/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-kube-api-access-v77g4\") pod \"certified-operators-j4cbb\" (UID: \"5462fac8-b03c-48c0-bc3d-b1a1b1285cab\") " pod="openshift-marketplace/certified-operators-j4cbb"
Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.738252 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-utilities\") pod \"certified-operators-j4cbb\" (UID: \"5462fac8-b03c-48c0-bc3d-b1a1b1285cab\") " pod="openshift-marketplace/certified-operators-j4cbb"
Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.738433 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v77g4\" (UniqueName: \"kubernetes.io/projected/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-kube-api-access-v77g4\") pod \"certified-operators-j4cbb\" (UID: \"5462fac8-b03c-48c0-bc3d-b1a1b1285cab\") " pod="openshift-marketplace/certified-operators-j4cbb"
Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.738780 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-utilities\") pod \"certified-operators-j4cbb\" (UID: \"5462fac8-b03c-48c0-bc3d-b1a1b1285cab\") " pod="openshift-marketplace/certified-operators-j4cbb"
Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.739060 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-catalog-content\") pod \"certified-operators-j4cbb\" (UID: \"5462fac8-b03c-48c0-bc3d-b1a1b1285cab\") " pod="openshift-marketplace/certified-operators-j4cbb"
Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.739435 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-catalog-content\") pod \"certified-operators-j4cbb\" (UID: \"5462fac8-b03c-48c0-bc3d-b1a1b1285cab\") " pod="openshift-marketplace/certified-operators-j4cbb"
Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.760368 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v77g4\" (UniqueName: \"kubernetes.io/projected/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-kube-api-access-v77g4\") pod \"certified-operators-j4cbb\" (UID: \"5462fac8-b03c-48c0-bc3d-b1a1b1285cab\") " pod="openshift-marketplace/certified-operators-j4cbb"
Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.789156 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j4cbb"
Jan 21 12:06:39 crc kubenswrapper[4881]: I0121 12:06:39.335759 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j4cbb"]
Jan 21 12:06:39 crc kubenswrapper[4881]: W0121 12:06:39.348247 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5462fac8_b03c_48c0_bc3d_b1a1b1285cab.slice/crio-f1af1bbc46ba691c69bc616913a216b385badd2ac173c74fb7757e7c43387e8d WatchSource:0}: Error finding container f1af1bbc46ba691c69bc616913a216b385badd2ac173c74fb7757e7c43387e8d: Status 404 returned error can't find the container with id f1af1bbc46ba691c69bc616913a216b385badd2ac173c74fb7757e7c43387e8d
Jan 21 12:06:40 crc kubenswrapper[4881]: I0121 12:06:40.015350 4881 generic.go:334] "Generic (PLEG): container finished" podID="5462fac8-b03c-48c0-bc3d-b1a1b1285cab" containerID="e46841663886567f48ff14137d656646ff12629cc60b1215035b0dd66d9313e9" exitCode=0
Jan 21 12:06:40 crc kubenswrapper[4881]: I0121 12:06:40.015439 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4cbb" event={"ID":"5462fac8-b03c-48c0-bc3d-b1a1b1285cab","Type":"ContainerDied","Data":"e46841663886567f48ff14137d656646ff12629cc60b1215035b0dd66d9313e9"}
Jan 21 12:06:40 crc kubenswrapper[4881]: I0121 12:06:40.015745 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4cbb" event={"ID":"5462fac8-b03c-48c0-bc3d-b1a1b1285cab","Type":"ContainerStarted","Data":"f1af1bbc46ba691c69bc616913a216b385badd2ac173c74fb7757e7c43387e8d"}
Jan 21 12:06:41 crc kubenswrapper[4881]: I0121 12:06:41.029324 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4cbb" event={"ID":"5462fac8-b03c-48c0-bc3d-b1a1b1285cab","Type":"ContainerStarted","Data":"4fe6a6b0c1166ac85e56fcac7eb34994c07625a7c33d186edad67b8e9cde8084"}
Jan 21 12:06:42 crc kubenswrapper[4881]: I0121 12:06:42.043919 4881 generic.go:334] "Generic (PLEG): container finished" podID="5462fac8-b03c-48c0-bc3d-b1a1b1285cab" containerID="4fe6a6b0c1166ac85e56fcac7eb34994c07625a7c33d186edad67b8e9cde8084" exitCode=0
Jan 21 12:06:42 crc kubenswrapper[4881]: I0121 12:06:42.044033 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4cbb" event={"ID":"5462fac8-b03c-48c0-bc3d-b1a1b1285cab","Type":"ContainerDied","Data":"4fe6a6b0c1166ac85e56fcac7eb34994c07625a7c33d186edad67b8e9cde8084"}
Jan 21 12:06:43 crc kubenswrapper[4881]: I0121 12:06:43.058593 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4cbb" event={"ID":"5462fac8-b03c-48c0-bc3d-b1a1b1285cab","Type":"ContainerStarted","Data":"7bf836af88f96370d65c4e80cff36822d97d322d077dfe06a20fe2ed7714e53e"}
Jan 21 12:06:43 crc kubenswrapper[4881]: I0121 12:06:43.082153 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j4cbb" podStartSLOduration=2.513594721 podStartE2EDuration="5.08212907s" podCreationTimestamp="2026-01-21 12:06:38 +0000 UTC" firstStartedPulling="2026-01-21 12:06:40.017183714 +0000 UTC m=+4187.277140183" lastFinishedPulling="2026-01-21 12:06:42.585718043 +0000 UTC m=+4189.845674532" observedRunningTime="2026-01-21 12:06:43.08166451 +0000 UTC m=+4190.341620999" watchObservedRunningTime="2026-01-21 12:06:43.08212907 +0000 UTC m=+4190.342085549"
Jan 21 12:06:48 crc kubenswrapper[4881]: I0121 12:06:48.789997 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j4cbb"
Jan 21 12:06:48 crc kubenswrapper[4881]: I0121 12:06:48.791209 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j4cbb"
Jan 21 12:06:48 crc kubenswrapper[4881]: I0121 12:06:48.835810 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j4cbb"
Jan 21 12:06:49 crc kubenswrapper[4881]: I0121 12:06:49.222381 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j4cbb"
Jan 21 12:06:49 crc kubenswrapper[4881]: I0121 12:06:49.294133 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j4cbb"]
Jan 21 12:06:51 crc kubenswrapper[4881]: I0121 12:06:51.136610 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j4cbb" podUID="5462fac8-b03c-48c0-bc3d-b1a1b1285cab" containerName="registry-server" containerID="cri-o://7bf836af88f96370d65c4e80cff36822d97d322d077dfe06a20fe2ed7714e53e" gracePeriod=2
Jan 21 12:06:51 crc kubenswrapper[4881]: I0121 12:06:51.694437 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j4cbb"
Jan 21 12:06:51 crc kubenswrapper[4881]: I0121 12:06:51.845679 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-catalog-content\") pod \"5462fac8-b03c-48c0-bc3d-b1a1b1285cab\" (UID: \"5462fac8-b03c-48c0-bc3d-b1a1b1285cab\") "
Jan 21 12:06:51 crc kubenswrapper[4881]: I0121 12:06:51.845876 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v77g4\" (UniqueName: \"kubernetes.io/projected/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-kube-api-access-v77g4\") pod \"5462fac8-b03c-48c0-bc3d-b1a1b1285cab\" (UID: \"5462fac8-b03c-48c0-bc3d-b1a1b1285cab\") "
Jan 21 12:06:51 crc kubenswrapper[4881]: I0121 12:06:51.845968 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-utilities\") pod \"5462fac8-b03c-48c0-bc3d-b1a1b1285cab\" (UID: \"5462fac8-b03c-48c0-bc3d-b1a1b1285cab\") "
Jan 21 12:06:51 crc kubenswrapper[4881]: I0121 12:06:51.846955 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-utilities" (OuterVolumeSpecName: "utilities") pod "5462fac8-b03c-48c0-bc3d-b1a1b1285cab" (UID: "5462fac8-b03c-48c0-bc3d-b1a1b1285cab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 12:06:51 crc kubenswrapper[4881]: I0121 12:06:51.861290 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-kube-api-access-v77g4" (OuterVolumeSpecName: "kube-api-access-v77g4") pod "5462fac8-b03c-48c0-bc3d-b1a1b1285cab" (UID: "5462fac8-b03c-48c0-bc3d-b1a1b1285cab"). InnerVolumeSpecName "kube-api-access-v77g4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 12:06:51 crc kubenswrapper[4881]: I0121 12:06:51.947911 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v77g4\" (UniqueName: \"kubernetes.io/projected/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-kube-api-access-v77g4\") on node \"crc\" DevicePath \"\""
Jan 21 12:06:51 crc kubenswrapper[4881]: I0121 12:06:51.947967 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.154404 4881 generic.go:334] "Generic (PLEG): container finished" podID="5462fac8-b03c-48c0-bc3d-b1a1b1285cab" containerID="7bf836af88f96370d65c4e80cff36822d97d322d077dfe06a20fe2ed7714e53e" exitCode=0
Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.154534 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4cbb" event={"ID":"5462fac8-b03c-48c0-bc3d-b1a1b1285cab","Type":"ContainerDied","Data":"7bf836af88f96370d65c4e80cff36822d97d322d077dfe06a20fe2ed7714e53e"}
Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.154564 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j4cbb"
Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.154595 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4cbb" event={"ID":"5462fac8-b03c-48c0-bc3d-b1a1b1285cab","Type":"ContainerDied","Data":"f1af1bbc46ba691c69bc616913a216b385badd2ac173c74fb7757e7c43387e8d"}
Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.154618 4881 scope.go:117] "RemoveContainer" containerID="7bf836af88f96370d65c4e80cff36822d97d322d077dfe06a20fe2ed7714e53e"
Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.181374 4881 scope.go:117] "RemoveContainer" containerID="4fe6a6b0c1166ac85e56fcac7eb34994c07625a7c33d186edad67b8e9cde8084"
Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.207546 4881 scope.go:117] "RemoveContainer" containerID="e46841663886567f48ff14137d656646ff12629cc60b1215035b0dd66d9313e9"
Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.284410 4881 scope.go:117] "RemoveContainer" containerID="7bf836af88f96370d65c4e80cff36822d97d322d077dfe06a20fe2ed7714e53e"
Jan 21 12:06:52 crc kubenswrapper[4881]: E0121 12:06:52.284956 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bf836af88f96370d65c4e80cff36822d97d322d077dfe06a20fe2ed7714e53e\": container with ID starting with 7bf836af88f96370d65c4e80cff36822d97d322d077dfe06a20fe2ed7714e53e not found: ID does not exist" containerID="7bf836af88f96370d65c4e80cff36822d97d322d077dfe06a20fe2ed7714e53e"
Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.284999 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bf836af88f96370d65c4e80cff36822d97d322d077dfe06a20fe2ed7714e53e"} err="failed to get container status \"7bf836af88f96370d65c4e80cff36822d97d322d077dfe06a20fe2ed7714e53e\": rpc error: code = NotFound desc = could not find container \"7bf836af88f96370d65c4e80cff36822d97d322d077dfe06a20fe2ed7714e53e\": container with ID starting with 7bf836af88f96370d65c4e80cff36822d97d322d077dfe06a20fe2ed7714e53e not found: ID does not exist"
Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.285029 4881 scope.go:117] "RemoveContainer" containerID="4fe6a6b0c1166ac85e56fcac7eb34994c07625a7c33d186edad67b8e9cde8084"
Jan 21 12:06:52 crc kubenswrapper[4881]: E0121 12:06:52.285540 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fe6a6b0c1166ac85e56fcac7eb34994c07625a7c33d186edad67b8e9cde8084\": container with ID starting with 4fe6a6b0c1166ac85e56fcac7eb34994c07625a7c33d186edad67b8e9cde8084 not found: ID does not exist" containerID="4fe6a6b0c1166ac85e56fcac7eb34994c07625a7c33d186edad67b8e9cde8084"
Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.285571 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fe6a6b0c1166ac85e56fcac7eb34994c07625a7c33d186edad67b8e9cde8084"} err="failed to get container status \"4fe6a6b0c1166ac85e56fcac7eb34994c07625a7c33d186edad67b8e9cde8084\": rpc error: code = NotFound desc = could not find container \"4fe6a6b0c1166ac85e56fcac7eb34994c07625a7c33d186edad67b8e9cde8084\": container with ID starting with 4fe6a6b0c1166ac85e56fcac7eb34994c07625a7c33d186edad67b8e9cde8084 not found: ID does not exist"
Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.285595 4881 scope.go:117] "RemoveContainer" containerID="e46841663886567f48ff14137d656646ff12629cc60b1215035b0dd66d9313e9"
Jan 21 12:06:52 crc kubenswrapper[4881]: E0121 12:06:52.285876 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e46841663886567f48ff14137d656646ff12629cc60b1215035b0dd66d9313e9\": container with ID starting with e46841663886567f48ff14137d656646ff12629cc60b1215035b0dd66d9313e9 not found: ID does not exist" containerID="e46841663886567f48ff14137d656646ff12629cc60b1215035b0dd66d9313e9"
Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.285920 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e46841663886567f48ff14137d656646ff12629cc60b1215035b0dd66d9313e9"} err="failed to get container status \"e46841663886567f48ff14137d656646ff12629cc60b1215035b0dd66d9313e9\": rpc error: code = NotFound desc = could not find container \"e46841663886567f48ff14137d656646ff12629cc60b1215035b0dd66d9313e9\": container with ID starting with e46841663886567f48ff14137d656646ff12629cc60b1215035b0dd66d9313e9 not found: ID does not exist"
Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.468750 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5462fac8-b03c-48c0-bc3d-b1a1b1285cab" (UID: "5462fac8-b03c-48c0-bc3d-b1a1b1285cab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.562464 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.792958 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j4cbb"]
Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.801258 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j4cbb"]
Jan 21 12:06:53 crc kubenswrapper[4881]: I0121 12:06:53.327851 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5462fac8-b03c-48c0-bc3d-b1a1b1285cab" path="/var/lib/kubelet/pods/5462fac8-b03c-48c0-bc3d-b1a1b1285cab/volumes"
Jan 21 12:07:59 crc kubenswrapper[4881]: I0121 12:07:59.850918 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 12:07:59 crc kubenswrapper[4881]: I0121 12:07:59.852070 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 12:08:29 crc kubenswrapper[4881]: I0121 12:08:29.851633 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 12:08:29 crc kubenswrapper[4881]: I0121 12:08:29.852977 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.096477 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7w9td"]
Jan 21 12:08:55 crc kubenswrapper[4881]: E0121 12:08:55.097500 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5462fac8-b03c-48c0-bc3d-b1a1b1285cab" containerName="extract-content"
Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.097516 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5462fac8-b03c-48c0-bc3d-b1a1b1285cab" containerName="extract-content"
Jan 21 12:08:55 crc kubenswrapper[4881]: E0121 12:08:55.097531 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5462fac8-b03c-48c0-bc3d-b1a1b1285cab" containerName="registry-server"
Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.097537 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5462fac8-b03c-48c0-bc3d-b1a1b1285cab" containerName="registry-server"
Jan 21 12:08:55 crc kubenswrapper[4881]: E0121 12:08:55.097554 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5462fac8-b03c-48c0-bc3d-b1a1b1285cab" containerName="extract-utilities"
Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.097562 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5462fac8-b03c-48c0-bc3d-b1a1b1285cab" containerName="extract-utilities"
Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.097814 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="5462fac8-b03c-48c0-bc3d-b1a1b1285cab" containerName="registry-server"
Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.099735 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7w9td"
Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.112916 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7w9td"]
Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.213637 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae7c44c-9f78-4779-bff2-32f7e9246561-utilities\") pod \"redhat-marketplace-7w9td\" (UID: \"9ae7c44c-9f78-4779-bff2-32f7e9246561\") " pod="openshift-marketplace/redhat-marketplace-7w9td"
Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.213863 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnswq\" (UniqueName: \"kubernetes.io/projected/9ae7c44c-9f78-4779-bff2-32f7e9246561-kube-api-access-tnswq\") pod \"redhat-marketplace-7w9td\" (UID: \"9ae7c44c-9f78-4779-bff2-32f7e9246561\") " pod="openshift-marketplace/redhat-marketplace-7w9td"
Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.213898 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae7c44c-9f78-4779-bff2-32f7e9246561-catalog-content\") pod \"redhat-marketplace-7w9td\" (UID: \"9ae7c44c-9f78-4779-bff2-32f7e9246561\") " pod="openshift-marketplace/redhat-marketplace-7w9td"
Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.316222 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnswq\" (UniqueName: \"kubernetes.io/projected/9ae7c44c-9f78-4779-bff2-32f7e9246561-kube-api-access-tnswq\") pod \"redhat-marketplace-7w9td\" (UID: \"9ae7c44c-9f78-4779-bff2-32f7e9246561\") " pod="openshift-marketplace/redhat-marketplace-7w9td"
Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.316279 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae7c44c-9f78-4779-bff2-32f7e9246561-catalog-content\") pod \"redhat-marketplace-7w9td\" (UID: \"9ae7c44c-9f78-4779-bff2-32f7e9246561\") " pod="openshift-marketplace/redhat-marketplace-7w9td"
Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.316383 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae7c44c-9f78-4779-bff2-32f7e9246561-utilities\") pod \"redhat-marketplace-7w9td\" (UID: \"9ae7c44c-9f78-4779-bff2-32f7e9246561\") " pod="openshift-marketplace/redhat-marketplace-7w9td"
Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.317150 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae7c44c-9f78-4779-bff2-32f7e9246561-catalog-content\") pod \"redhat-marketplace-7w9td\" (UID: \"9ae7c44c-9f78-4779-bff2-32f7e9246561\") " pod="openshift-marketplace/redhat-marketplace-7w9td"
Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.318392 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae7c44c-9f78-4779-bff2-32f7e9246561-utilities\") pod \"redhat-marketplace-7w9td\" (UID: \"9ae7c44c-9f78-4779-bff2-32f7e9246561\") " pod="openshift-marketplace/redhat-marketplace-7w9td"
Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.348731 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnswq\" (UniqueName: \"kubernetes.io/projected/9ae7c44c-9f78-4779-bff2-32f7e9246561-kube-api-access-tnswq\") pod \"redhat-marketplace-7w9td\" (UID: \"9ae7c44c-9f78-4779-bff2-32f7e9246561\") " pod="openshift-marketplace/redhat-marketplace-7w9td"
Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.426573 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7w9td"
Jan 21 12:08:56 crc kubenswrapper[4881]: W0121 12:08:56.074009 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ae7c44c_9f78_4779_bff2_32f7e9246561.slice/crio-3ed9f13914506207c9422f141b24b636ccc90d4691f6a58a623a8afce4a6435c WatchSource:0}: Error finding container 3ed9f13914506207c9422f141b24b636ccc90d4691f6a58a623a8afce4a6435c: Status 404 returned error can't find the container with id 3ed9f13914506207c9422f141b24b636ccc90d4691f6a58a623a8afce4a6435c
Jan 21 12:08:56 crc kubenswrapper[4881]: I0121 12:08:56.074275 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7w9td"]
Jan 21 12:08:56 crc kubenswrapper[4881]: I0121 12:08:56.575767 4881 generic.go:334] "Generic (PLEG): container finished" podID="9ae7c44c-9f78-4779-bff2-32f7e9246561" containerID="e22a5c92c26572faa88a6270d679c6e86564b96b3bd18b49411f96d82f0edf3a" exitCode=0
Jan 21 12:08:56 crc kubenswrapper[4881]: I0121 12:08:56.575887 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7w9td" event={"ID":"9ae7c44c-9f78-4779-bff2-32f7e9246561","Type":"ContainerDied","Data":"e22a5c92c26572faa88a6270d679c6e86564b96b3bd18b49411f96d82f0edf3a"}
Jan 21 12:08:56 crc kubenswrapper[4881]: I0121 12:08:56.577154 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7w9td" event={"ID":"9ae7c44c-9f78-4779-bff2-32f7e9246561","Type":"ContainerStarted","Data":"3ed9f13914506207c9422f141b24b636ccc90d4691f6a58a623a8afce4a6435c"}
Jan 21 12:08:57 crc kubenswrapper[4881]: I0121 12:08:57.589354 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7w9td" event={"ID":"9ae7c44c-9f78-4779-bff2-32f7e9246561","Type":"ContainerStarted","Data":"33738adcb837298193b5fcb1b545a05c76ce8e00c4eb51940c56a7d4f8ae54a9"}
Jan 21 12:08:58 crc kubenswrapper[4881]: I0121 12:08:58.600768 4881 generic.go:334] "Generic (PLEG): container finished" podID="9ae7c44c-9f78-4779-bff2-32f7e9246561" containerID="33738adcb837298193b5fcb1b545a05c76ce8e00c4eb51940c56a7d4f8ae54a9" exitCode=0
Jan 21 12:08:58 crc kubenswrapper[4881]: I0121 12:08:58.600873 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7w9td" event={"ID":"9ae7c44c-9f78-4779-bff2-32f7e9246561","Type":"ContainerDied","Data":"33738adcb837298193b5fcb1b545a05c76ce8e00c4eb51940c56a7d4f8ae54a9"}
Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.462469 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m69zl"]
Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.465202 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m69zl"
Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.474564 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m69zl"]
Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.579359 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ca80118-375e-4587-af3d-453c7aef306d-utilities\") pod \"community-operators-m69zl\" (UID: \"1ca80118-375e-4587-af3d-453c7aef306d\") " pod="openshift-marketplace/community-operators-m69zl"
Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.579461 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p8sg\" (UniqueName: \"kubernetes.io/projected/1ca80118-375e-4587-af3d-453c7aef306d-kube-api-access-2p8sg\") pod \"community-operators-m69zl\" (UID: \"1ca80118-375e-4587-af3d-453c7aef306d\") " pod="openshift-marketplace/community-operators-m69zl"
Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.580093 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ca80118-375e-4587-af3d-453c7aef306d-catalog-content\") pod \"community-operators-m69zl\" (UID: \"1ca80118-375e-4587-af3d-453c7aef306d\") " pod="openshift-marketplace/community-operators-m69zl"
Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.627286 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7w9td" event={"ID":"9ae7c44c-9f78-4779-bff2-32f7e9246561","Type":"ContainerStarted","Data":"6cbcaf18899b545b2bb3924a1d127bfcef1623f612970032892559e5afff015c"}
Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.649758 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7w9td" podStartSLOduration=2.234592172 podStartE2EDuration="4.649738831s" podCreationTimestamp="2026-01-21 12:08:55 +0000 UTC" firstStartedPulling="2026-01-21 12:08:56.57919603 +0000 UTC m=+4323.839152499" lastFinishedPulling="2026-01-21 12:08:58.994342699 +0000 UTC m=+4326.254299158" observedRunningTime="2026-01-21 12:08:59.645058928 +0000 UTC m=+4326.905015397" watchObservedRunningTime="2026-01-21 12:08:59.649738831 +0000 UTC m=+4326.909695300"
Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.682677 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ca80118-375e-4587-af3d-453c7aef306d-catalog-content\") pod \"community-operators-m69zl\" (UID: \"1ca80118-375e-4587-af3d-453c7aef306d\") " pod="openshift-marketplace/community-operators-m69zl"
Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.682845 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ca80118-375e-4587-af3d-453c7aef306d-utilities\") pod \"community-operators-m69zl\" (UID: \"1ca80118-375e-4587-af3d-453c7aef306d\") " pod="openshift-marketplace/community-operators-m69zl"
Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.682937 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p8sg\" (UniqueName: \"kubernetes.io/projected/1ca80118-375e-4587-af3d-453c7aef306d-kube-api-access-2p8sg\") pod \"community-operators-m69zl\" (UID: \"1ca80118-375e-4587-af3d-453c7aef306d\") " pod="openshift-marketplace/community-operators-m69zl"
Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.683324 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ca80118-375e-4587-af3d-453c7aef306d-catalog-content\") pod \"community-operators-m69zl\" (UID: \"1ca80118-375e-4587-af3d-453c7aef306d\") " pod="openshift-marketplace/community-operators-m69zl"
Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.683356 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ca80118-375e-4587-af3d-453c7aef306d-utilities\") pod \"community-operators-m69zl\" (UID: \"1ca80118-375e-4587-af3d-453c7aef306d\") " pod="openshift-marketplace/community-operators-m69zl"
Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.705768 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p8sg\" (UniqueName: \"kubernetes.io/projected/1ca80118-375e-4587-af3d-453c7aef306d-kube-api-access-2p8sg\") pod \"community-operators-m69zl\" (UID: \"1ca80118-375e-4587-af3d-453c7aef306d\") " pod="openshift-marketplace/community-operators-m69zl"
Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.788083 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m69zl"
Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.855234 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.855306 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.855358 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr"
Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.856294 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8fa2fcd197247817c68b133d6a51bf7eca2545a597f5deb7e87467827e522318"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.856362 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://8fa2fcd197247817c68b133d6a51bf7eca2545a597f5deb7e87467827e522318" gracePeriod=600
Jan 21 12:09:00 crc kubenswrapper[4881]: W0121 12:09:00.425001 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ca80118_375e_4587_af3d_453c7aef306d.slice/crio-31cd2d8ee9e30c576f18af6af28a532b366882fea3d8d9cdfbf767da46a002fb WatchSource:0}: Error finding container 31cd2d8ee9e30c576f18af6af28a532b366882fea3d8d9cdfbf767da46a002fb: Status 404 returned error can't find the container with id 31cd2d8ee9e30c576f18af6af28a532b366882fea3d8d9cdfbf767da46a002fb
Jan 21 12:09:00 crc kubenswrapper[4881]: I0121 12:09:00.431496 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m69zl"]
Jan 21 12:09:00 crc kubenswrapper[4881]: I0121 12:09:00.640459 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="8fa2fcd197247817c68b133d6a51bf7eca2545a597f5deb7e87467827e522318" exitCode=0
Jan 21 12:09:00 crc kubenswrapper[4881]: I0121 12:09:00.640665 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"8fa2fcd197247817c68b133d6a51bf7eca2545a597f5deb7e87467827e522318"}
Jan 21 12:09:00 crc kubenswrapper[4881]: I0121 12:09:00.640841 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c"}
Jan 21 12:09:00 crc kubenswrapper[4881]: I0121 12:09:00.640866 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9"
Jan 21 12:09:00 crc kubenswrapper[4881]: I0121 12:09:00.644303 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m69zl" event={"ID":"1ca80118-375e-4587-af3d-453c7aef306d","Type":"ContainerStarted","Data":"31cd2d8ee9e30c576f18af6af28a532b366882fea3d8d9cdfbf767da46a002fb"}
Jan 21 12:09:01 crc kubenswrapper[4881]: I0121 12:09:01.658362 4881 generic.go:334] "Generic (PLEG): container finished" podID="1ca80118-375e-4587-af3d-453c7aef306d" containerID="8484a112814abc2dceafcc57020ec2ffd43842766914ec559956998f2341eb55" exitCode=0
Jan 21 12:09:01 crc kubenswrapper[4881]: I0121 12:09:01.658491 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m69zl" event={"ID":"1ca80118-375e-4587-af3d-453c7aef306d","Type":"ContainerDied","Data":"8484a112814abc2dceafcc57020ec2ffd43842766914ec559956998f2341eb55"}
Jan 21 12:09:02 crc kubenswrapper[4881]: I0121 12:09:02.676716 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m69zl" event={"ID":"1ca80118-375e-4587-af3d-453c7aef306d","Type":"ContainerStarted","Data":"6a6d993de2d9eb5fb7e796d408b5595c3142efe400d24742f86195b8bca57d22"}
Jan 21 12:09:04 crc kubenswrapper[4881]: I0121 12:09:04.700769 4881 generic.go:334] "Generic (PLEG): container finished" podID="1ca80118-375e-4587-af3d-453c7aef306d" containerID="6a6d993de2d9eb5fb7e796d408b5595c3142efe400d24742f86195b8bca57d22" exitCode=0
Jan 21 12:09:04 crc kubenswrapper[4881]: I0121 12:09:04.700841 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m69zl" event={"ID":"1ca80118-375e-4587-af3d-453c7aef306d","Type":"ContainerDied","Data":"6a6d993de2d9eb5fb7e796d408b5595c3142efe400d24742f86195b8bca57d22"}
Jan 21 12:09:05 crc kubenswrapper[4881]: I0121 12:09:05.426997 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7w9td"
Jan 21 12:09:05 crc kubenswrapper[4881]: I0121 12:09:05.427667 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7w9td"
Jan 21 12:09:05 crc kubenswrapper[4881]: I0121 12:09:05.479704 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7w9td"
Jan 21 12:09:05 crc kubenswrapper[4881]: I0121 12:09:05.713132 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m69zl" event={"ID":"1ca80118-375e-4587-af3d-453c7aef306d","Type":"ContainerStarted","Data":"1c08d6247abf9de8a7636b7d6967781604b5af0caca601e2cf2305330c9a007f"}
Jan 21 12:09:05 crc kubenswrapper[4881]: I0121 12:09:05.734048 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m69zl" podStartSLOduration=3.232060341 podStartE2EDuration="6.734029465s" podCreationTimestamp="2026-01-21 12:08:59 +0000 UTC" firstStartedPulling="2026-01-21 12:09:01.660707446 +0000 UTC m=+4328.920663925" lastFinishedPulling="2026-01-21 12:09:05.16267658 +0000 UTC m=+4332.422633049" observedRunningTime="2026-01-21 12:09:05.730247214 +0000 UTC m=+4332.990203683" watchObservedRunningTime="2026-01-21 12:09:05.734029465 +0000 UTC m=+4332.993985934"
Jan 21 12:09:05 crc kubenswrapper[4881]: I0121 12:09:05.779770 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7w9td"
Jan 21 12:09:07 crc kubenswrapper[4881]: I0121 12:09:07.858893 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7w9td"]
Jan 21 12:09:07 crc kubenswrapper[4881]: I0121 12:09:07.859651 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7w9td" podUID="9ae7c44c-9f78-4779-bff2-32f7e9246561" containerName="registry-server" containerID="cri-o://6cbcaf18899b545b2bb3924a1d127bfcef1623f612970032892559e5afff015c" gracePeriod=2
Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.395745 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7w9td"
Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.509927 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae7c44c-9f78-4779-bff2-32f7e9246561-utilities\") pod \"9ae7c44c-9f78-4779-bff2-32f7e9246561\" (UID: \"9ae7c44c-9f78-4779-bff2-32f7e9246561\") "
Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.510074 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae7c44c-9f78-4779-bff2-32f7e9246561-catalog-content\") pod \"9ae7c44c-9f78-4779-bff2-32f7e9246561\" (UID: \"9ae7c44c-9f78-4779-bff2-32f7e9246561\") "
Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.510212 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnswq\" (UniqueName: \"kubernetes.io/projected/9ae7c44c-9f78-4779-bff2-32f7e9246561-kube-api-access-tnswq\") pod \"9ae7c44c-9f78-4779-bff2-32f7e9246561\" (UID: \"9ae7c44c-9f78-4779-bff2-32f7e9246561\") "
Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.511076 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ae7c44c-9f78-4779-bff2-32f7e9246561-utilities" (OuterVolumeSpecName: "utilities") pod "9ae7c44c-9f78-4779-bff2-32f7e9246561" (UID: "9ae7c44c-9f78-4779-bff2-32f7e9246561"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.516154 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ae7c44c-9f78-4779-bff2-32f7e9246561-kube-api-access-tnswq" (OuterVolumeSpecName: "kube-api-access-tnswq") pod "9ae7c44c-9f78-4779-bff2-32f7e9246561" (UID: "9ae7c44c-9f78-4779-bff2-32f7e9246561"). InnerVolumeSpecName "kube-api-access-tnswq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.539221 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ae7c44c-9f78-4779-bff2-32f7e9246561-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ae7c44c-9f78-4779-bff2-32f7e9246561" (UID: "9ae7c44c-9f78-4779-bff2-32f7e9246561"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.612474 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnswq\" (UniqueName: \"kubernetes.io/projected/9ae7c44c-9f78-4779-bff2-32f7e9246561-kube-api-access-tnswq\") on node \"crc\" DevicePath \"\""
Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.612514 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae7c44c-9f78-4779-bff2-32f7e9246561-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.612527 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae7c44c-9f78-4779-bff2-32f7e9246561-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.746533 4881 generic.go:334] "Generic (PLEG): container finished" podID="9ae7c44c-9f78-4779-bff2-32f7e9246561" containerID="6cbcaf18899b545b2bb3924a1d127bfcef1623f612970032892559e5afff015c" exitCode=0
Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.746580 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7w9td" event={"ID":"9ae7c44c-9f78-4779-bff2-32f7e9246561","Type":"ContainerDied","Data":"6cbcaf18899b545b2bb3924a1d127bfcef1623f612970032892559e5afff015c"}
Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.746616 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7w9td" event={"ID":"9ae7c44c-9f78-4779-bff2-32f7e9246561","Type":"ContainerDied","Data":"3ed9f13914506207c9422f141b24b636ccc90d4691f6a58a623a8afce4a6435c"}
Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.746624 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7w9td"
Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.746635 4881 scope.go:117] "RemoveContainer" containerID="6cbcaf18899b545b2bb3924a1d127bfcef1623f612970032892559e5afff015c"
Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.776323 4881 scope.go:117] "RemoveContainer" containerID="33738adcb837298193b5fcb1b545a05c76ce8e00c4eb51940c56a7d4f8ae54a9"
Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.802969 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7w9td"]
Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.805595 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7w9td"]
Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.825024 4881 scope.go:117] "RemoveContainer" containerID="e22a5c92c26572faa88a6270d679c6e86564b96b3bd18b49411f96d82f0edf3a"
Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.860114 4881 scope.go:117] "RemoveContainer" containerID="6cbcaf18899b545b2bb3924a1d127bfcef1623f612970032892559e5afff015c"
Jan 21 12:09:08 crc kubenswrapper[4881]: E0121 12:09:08.860595 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cbcaf18899b545b2bb3924a1d127bfcef1623f612970032892559e5afff015c\": container with ID starting with 6cbcaf18899b545b2bb3924a1d127bfcef1623f612970032892559e5afff015c not found: ID does not exist" containerID="6cbcaf18899b545b2bb3924a1d127bfcef1623f612970032892559e5afff015c"
Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.860635 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cbcaf18899b545b2bb3924a1d127bfcef1623f612970032892559e5afff015c"} err="failed to get container status \"6cbcaf18899b545b2bb3924a1d127bfcef1623f612970032892559e5afff015c\": rpc error: code = NotFound desc = could not find container \"6cbcaf18899b545b2bb3924a1d127bfcef1623f612970032892559e5afff015c\": container with ID starting with 6cbcaf18899b545b2bb3924a1d127bfcef1623f612970032892559e5afff015c not found: ID does not exist"
Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.860659 4881 scope.go:117] "RemoveContainer" containerID="33738adcb837298193b5fcb1b545a05c76ce8e00c4eb51940c56a7d4f8ae54a9"
Jan 21 12:09:08 crc kubenswrapper[4881]: E0121 12:09:08.861050 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33738adcb837298193b5fcb1b545a05c76ce8e00c4eb51940c56a7d4f8ae54a9\": container with ID starting with 33738adcb837298193b5fcb1b545a05c76ce8e00c4eb51940c56a7d4f8ae54a9 not found: ID does not exist" containerID="33738adcb837298193b5fcb1b545a05c76ce8e00c4eb51940c56a7d4f8ae54a9"
Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.861075 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33738adcb837298193b5fcb1b545a05c76ce8e00c4eb51940c56a7d4f8ae54a9"} err="failed to get container status \"33738adcb837298193b5fcb1b545a05c76ce8e00c4eb51940c56a7d4f8ae54a9\": rpc error: code = NotFound desc = could not find container \"33738adcb837298193b5fcb1b545a05c76ce8e00c4eb51940c56a7d4f8ae54a9\": container with ID starting with 33738adcb837298193b5fcb1b545a05c76ce8e00c4eb51940c56a7d4f8ae54a9 not found: ID does not exist"
Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.861092 4881 scope.go:117] "RemoveContainer" containerID="e22a5c92c26572faa88a6270d679c6e86564b96b3bd18b49411f96d82f0edf3a"
Jan 21 12:09:08 crc kubenswrapper[4881]: E0121 12:09:08.861349 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e22a5c92c26572faa88a6270d679c6e86564b96b3bd18b49411f96d82f0edf3a\": container with ID starting with e22a5c92c26572faa88a6270d679c6e86564b96b3bd18b49411f96d82f0edf3a not found: ID does not exist" containerID="e22a5c92c26572faa88a6270d679c6e86564b96b3bd18b49411f96d82f0edf3a"
Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.861375 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e22a5c92c26572faa88a6270d679c6e86564b96b3bd18b49411f96d82f0edf3a"} err="failed to get container status \"e22a5c92c26572faa88a6270d679c6e86564b96b3bd18b49411f96d82f0edf3a\": rpc error: code = NotFound desc = could not find container \"e22a5c92c26572faa88a6270d679c6e86564b96b3bd18b49411f96d82f0edf3a\": container with ID starting with e22a5c92c26572faa88a6270d679c6e86564b96b3bd18b49411f96d82f0edf3a not found: ID does not exist"
Jan 21 12:09:09 crc kubenswrapper[4881]: I0121 12:09:09.322272 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ae7c44c-9f78-4779-bff2-32f7e9246561" path="/var/lib/kubelet/pods/9ae7c44c-9f78-4779-bff2-32f7e9246561/volumes"
Jan 21 12:09:09 crc kubenswrapper[4881]: I0121 12:09:09.789024 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m69zl"
Jan 21 12:09:09 crc kubenswrapper[4881]: I0121 12:09:09.789686 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m69zl"
Jan 21 12:09:09 crc kubenswrapper[4881]: I0121 12:09:09.853543 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m69zl"
Jan 21 12:09:10 crc kubenswrapper[4881]: I0121 12:09:10.877650 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m69zl"
Jan 21 12:09:11 crc kubenswrapper[4881]: I0121 12:09:11.259553 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m69zl"]
Jan 21 12:09:12 crc kubenswrapper[4881]: I0121 12:09:12.791907 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m69zl" podUID="1ca80118-375e-4587-af3d-453c7aef306d" containerName="registry-server" containerID="cri-o://1c08d6247abf9de8a7636b7d6967781604b5af0caca601e2cf2305330c9a007f" gracePeriod=2
Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.601906 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m69zl"
Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.734459 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ca80118-375e-4587-af3d-453c7aef306d-catalog-content\") pod \"1ca80118-375e-4587-af3d-453c7aef306d\" (UID: \"1ca80118-375e-4587-af3d-453c7aef306d\") "
Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.734620 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ca80118-375e-4587-af3d-453c7aef306d-utilities\") pod \"1ca80118-375e-4587-af3d-453c7aef306d\" (UID: \"1ca80118-375e-4587-af3d-453c7aef306d\") "
Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.734663 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2p8sg\" (UniqueName: \"kubernetes.io/projected/1ca80118-375e-4587-af3d-453c7aef306d-kube-api-access-2p8sg\") pod \"1ca80118-375e-4587-af3d-453c7aef306d\" (UID: \"1ca80118-375e-4587-af3d-453c7aef306d\") "
Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.736050 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ca80118-375e-4587-af3d-453c7aef306d-utilities" (OuterVolumeSpecName: "utilities") pod "1ca80118-375e-4587-af3d-453c7aef306d" (UID: "1ca80118-375e-4587-af3d-453c7aef306d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.743118 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ca80118-375e-4587-af3d-453c7aef306d-kube-api-access-2p8sg" (OuterVolumeSpecName: "kube-api-access-2p8sg") pod "1ca80118-375e-4587-af3d-453c7aef306d" (UID: "1ca80118-375e-4587-af3d-453c7aef306d"). InnerVolumeSpecName "kube-api-access-2p8sg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.808025 4881 generic.go:334] "Generic (PLEG): container finished" podID="1ca80118-375e-4587-af3d-453c7aef306d" containerID="1c08d6247abf9de8a7636b7d6967781604b5af0caca601e2cf2305330c9a007f" exitCode=0
Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.808086 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m69zl" event={"ID":"1ca80118-375e-4587-af3d-453c7aef306d","Type":"ContainerDied","Data":"1c08d6247abf9de8a7636b7d6967781604b5af0caca601e2cf2305330c9a007f"}
Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.808125 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m69zl" event={"ID":"1ca80118-375e-4587-af3d-453c7aef306d","Type":"ContainerDied","Data":"31cd2d8ee9e30c576f18af6af28a532b366882fea3d8d9cdfbf767da46a002fb"}
Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.808149 4881 scope.go:117] "RemoveContainer" containerID="1c08d6247abf9de8a7636b7d6967781604b5af0caca601e2cf2305330c9a007f"
Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.808089 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m69zl"
Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.809903 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ca80118-375e-4587-af3d-453c7aef306d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ca80118-375e-4587-af3d-453c7aef306d" (UID: "1ca80118-375e-4587-af3d-453c7aef306d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.829007 4881 scope.go:117] "RemoveContainer" containerID="6a6d993de2d9eb5fb7e796d408b5595c3142efe400d24742f86195b8bca57d22"
Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.837421 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ca80118-375e-4587-af3d-453c7aef306d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.837464 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ca80118-375e-4587-af3d-453c7aef306d-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.837479 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2p8sg\" (UniqueName: \"kubernetes.io/projected/1ca80118-375e-4587-af3d-453c7aef306d-kube-api-access-2p8sg\") on node \"crc\" DevicePath \"\""
Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.850176 4881 scope.go:117] "RemoveContainer" containerID="8484a112814abc2dceafcc57020ec2ffd43842766914ec559956998f2341eb55"
Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.902591 4881 scope.go:117] "RemoveContainer" containerID="1c08d6247abf9de8a7636b7d6967781604b5af0caca601e2cf2305330c9a007f"
Jan 21 12:09:13 crc kubenswrapper[4881]: E0121 12:09:13.903446 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c08d6247abf9de8a7636b7d6967781604b5af0caca601e2cf2305330c9a007f\": container with ID starting with 1c08d6247abf9de8a7636b7d6967781604b5af0caca601e2cf2305330c9a007f not found: ID does not exist" containerID="1c08d6247abf9de8a7636b7d6967781604b5af0caca601e2cf2305330c9a007f"
Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.903503 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c08d6247abf9de8a7636b7d6967781604b5af0caca601e2cf2305330c9a007f"} err="failed to get container status \"1c08d6247abf9de8a7636b7d6967781604b5af0caca601e2cf2305330c9a007f\": rpc error: code = NotFound desc = could not find container \"1c08d6247abf9de8a7636b7d6967781604b5af0caca601e2cf2305330c9a007f\": container with ID starting with 1c08d6247abf9de8a7636b7d6967781604b5af0caca601e2cf2305330c9a007f not found: ID does not exist"
Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.903536 4881 scope.go:117] "RemoveContainer" containerID="6a6d993de2d9eb5fb7e796d408b5595c3142efe400d24742f86195b8bca57d22"
Jan 21 12:09:13 crc kubenswrapper[4881]: E0121 12:09:13.903954 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a6d993de2d9eb5fb7e796d408b5595c3142efe400d24742f86195b8bca57d22\": container with ID starting with 6a6d993de2d9eb5fb7e796d408b5595c3142efe400d24742f86195b8bca57d22 not found: ID does not exist" containerID="6a6d993de2d9eb5fb7e796d408b5595c3142efe400d24742f86195b8bca57d22"
Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.903986 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a6d993de2d9eb5fb7e796d408b5595c3142efe400d24742f86195b8bca57d22"} err="failed to get container status \"6a6d993de2d9eb5fb7e796d408b5595c3142efe400d24742f86195b8bca57d22\": rpc error: code = NotFound desc = could not find container \"6a6d993de2d9eb5fb7e796d408b5595c3142efe400d24742f86195b8bca57d22\": container with ID starting with 6a6d993de2d9eb5fb7e796d408b5595c3142efe400d24742f86195b8bca57d22 not found: ID does not exist"
Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.903999 4881 scope.go:117] "RemoveContainer" containerID="8484a112814abc2dceafcc57020ec2ffd43842766914ec559956998f2341eb55"
Jan 21 12:09:13 crc kubenswrapper[4881]: E0121 12:09:13.904308 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8484a112814abc2dceafcc57020ec2ffd43842766914ec559956998f2341eb55\": container with ID starting with 8484a112814abc2dceafcc57020ec2ffd43842766914ec559956998f2341eb55 not found: ID does not exist" containerID="8484a112814abc2dceafcc57020ec2ffd43842766914ec559956998f2341eb55"
Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.904347 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8484a112814abc2dceafcc57020ec2ffd43842766914ec559956998f2341eb55"} err="failed to get container status \"8484a112814abc2dceafcc57020ec2ffd43842766914ec559956998f2341eb55\": rpc error: code = NotFound desc = could not find container \"8484a112814abc2dceafcc57020ec2ffd43842766914ec559956998f2341eb55\": container with ID starting with 8484a112814abc2dceafcc57020ec2ffd43842766914ec559956998f2341eb55 not found: ID does not exist"
Jan 21 12:09:14 crc kubenswrapper[4881]: I0121 12:09:14.144856 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m69zl"]
Jan 21 12:09:14 crc kubenswrapper[4881]: I0121 12:09:14.156218 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m69zl"]
Jan 21 12:09:15 crc kubenswrapper[4881]: I0121 12:09:15.326610 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ca80118-375e-4587-af3d-453c7aef306d" path="/var/lib/kubelet/pods/1ca80118-375e-4587-af3d-453c7aef306d/volumes"
Jan 21 12:11:29 crc kubenswrapper[4881]: I0121 12:11:29.851611 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 12:11:29 crc kubenswrapper[4881]: I0121 12:11:29.852289 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 12:12:00 crc kubenswrapper[4881]: I0121 12:11:59.852393 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 12:12:00 crc kubenswrapper[4881]: I0121 12:11:59.853059 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 12:12:06 crc kubenswrapper[4881]: I0121 12:12:06.361491 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-7vz4j" podUID="0a051fc2-b6e4-463c-bb0a-b565d12b21b4" containerName="registry-server" probeResult="failure" output=<
Jan 21 12:12:06 crc kubenswrapper[4881]: timeout: health rpc did not complete within 1s
Jan 21 12:12:06 crc kubenswrapper[4881]: >
Jan 21 12:12:29 crc kubenswrapper[4881]: I0121 12:12:29.851364 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 12:12:29 crc kubenswrapper[4881]: I0121 12:12:29.851988 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 12:12:29 crc kubenswrapper[4881]: I0121 12:12:29.852050 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr"
Jan 21 12:12:29 crc kubenswrapper[4881]: I0121 12:12:29.853085 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 12:12:29 crc kubenswrapper[4881]: I0121 12:12:29.853157 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" gracePeriod=600
Jan 21 12:12:29 crc kubenswrapper[4881]: E0121 12:12:29.985379 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:12:30 crc kubenswrapper[4881]: I0121 12:12:30.674451 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" exitCode=0
Jan 21 12:12:30 crc kubenswrapper[4881]: I0121 12:12:30.674677 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c"} Jan 21 12:12:30 crc kubenswrapper[4881]: I0121 12:12:30.675030 4881 scope.go:117] "RemoveContainer" containerID="8fa2fcd197247817c68b133d6a51bf7eca2545a597f5deb7e87467827e522318" Jan 21 12:12:30 crc kubenswrapper[4881]: I0121 12:12:30.676518 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:12:30 crc kubenswrapper[4881]: E0121 12:12:30.677406 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:12:45 crc kubenswrapper[4881]: I0121 12:12:45.315523 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:12:45 crc kubenswrapper[4881]: E0121 12:12:45.316405 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:12:56 crc kubenswrapper[4881]: I0121 12:12:56.313484 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:12:56 crc kubenswrapper[4881]: E0121 12:12:56.314761 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:13:10 crc kubenswrapper[4881]: I0121 12:13:10.395496 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:13:10 crc kubenswrapper[4881]: E0121 12:13:10.396271 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:13:13 crc kubenswrapper[4881]: I0121 12:13:13.767906 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="cd1973a5-773b-438b-aab7-709fb828324d" containerName="galera" probeResult="failure" output="command timed out" Jan 21 12:13:22 crc kubenswrapper[4881]: I0121 12:13:22.311869 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 
12:13:22 crc kubenswrapper[4881]: E0121 12:13:22.313448 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:13:33 crc kubenswrapper[4881]: I0121 12:13:33.334150 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:13:33 crc kubenswrapper[4881]: E0121 12:13:33.335679 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:13:44 crc kubenswrapper[4881]: I0121 12:13:44.312811 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:13:44 crc kubenswrapper[4881]: E0121 12:13:44.313701 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:13:56 crc kubenswrapper[4881]: I0121 12:13:56.311705 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:13:56 crc kubenswrapper[4881]: E0121 12:13:56.312494 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:14:11 crc kubenswrapper[4881]: I0121 12:14:11.311047 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:14:11 crc kubenswrapper[4881]: E0121 12:14:11.311739 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:14:24 crc kubenswrapper[4881]: I0121 12:14:24.311945 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:14:24 crc kubenswrapper[4881]: E0121 12:14:24.312879 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:14:37 crc kubenswrapper[4881]: I0121 12:14:37.311422 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:14:37 crc kubenswrapper[4881]: E0121 12:14:37.312940 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:14:48 crc kubenswrapper[4881]: I0121 12:14:48.311430 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:14:48 crc kubenswrapper[4881]: E0121 12:14:48.312278 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:14:59 crc kubenswrapper[4881]: I0121 12:14:59.313864 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:14:59 crc kubenswrapper[4881]: E0121 12:14:59.314737 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.186895 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c"] Jan 21 12:15:00 crc kubenswrapper[4881]: E0121 12:15:00.187671 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ca80118-375e-4587-af3d-453c7aef306d" containerName="extract-utilities" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.187700 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ca80118-375e-4587-af3d-453c7aef306d" containerName="extract-utilities" Jan 21 12:15:00 crc kubenswrapper[4881]: E0121 12:15:00.187714 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ca80118-375e-4587-af3d-453c7aef306d" containerName="registry-server" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.187720 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ca80118-375e-4587-af3d-453c7aef306d" containerName="registry-server" Jan 21 12:15:00 crc kubenswrapper[4881]: E0121 12:15:00.187742 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae7c44c-9f78-4779-bff2-32f7e9246561" containerName="extract-utilities" Jan 21 12:15:00 crc 
kubenswrapper[4881]: I0121 12:15:00.187749 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae7c44c-9f78-4779-bff2-32f7e9246561" containerName="extract-utilities" Jan 21 12:15:00 crc kubenswrapper[4881]: E0121 12:15:00.187771 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ca80118-375e-4587-af3d-453c7aef306d" containerName="extract-content" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.187777 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ca80118-375e-4587-af3d-453c7aef306d" containerName="extract-content" Jan 21 12:15:00 crc kubenswrapper[4881]: E0121 12:15:00.187786 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae7c44c-9f78-4779-bff2-32f7e9246561" containerName="registry-server" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.187793 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae7c44c-9f78-4779-bff2-32f7e9246561" containerName="registry-server" Jan 21 12:15:00 crc kubenswrapper[4881]: E0121 12:15:00.187833 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae7c44c-9f78-4779-bff2-32f7e9246561" containerName="extract-content" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.187843 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae7c44c-9f78-4779-bff2-32f7e9246561" containerName="extract-content" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.188136 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ca80118-375e-4587-af3d-453c7aef306d" containerName="registry-server" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.188158 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ae7c44c-9f78-4779-bff2-32f7e9246561" containerName="registry-server" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.189129 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.192453 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.192619 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.198936 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c"] Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.282306 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/22846423-24bd-4d85-b2da-a5c75401cd25-secret-volume\") pod \"collect-profiles-29483295-8zv6c\" (UID: \"22846423-24bd-4d85-b2da-a5c75401cd25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.282418 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t82zd\" (UniqueName: \"kubernetes.io/projected/22846423-24bd-4d85-b2da-a5c75401cd25-kube-api-access-t82zd\") pod \"collect-profiles-29483295-8zv6c\" (UID: \"22846423-24bd-4d85-b2da-a5c75401cd25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.282452 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22846423-24bd-4d85-b2da-a5c75401cd25-config-volume\") pod \"collect-profiles-29483295-8zv6c\" (UID: \"22846423-24bd-4d85-b2da-a5c75401cd25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.385753 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/22846423-24bd-4d85-b2da-a5c75401cd25-secret-volume\") pod \"collect-profiles-29483295-8zv6c\" (UID: \"22846423-24bd-4d85-b2da-a5c75401cd25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.385938 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t82zd\" (UniqueName: \"kubernetes.io/projected/22846423-24bd-4d85-b2da-a5c75401cd25-kube-api-access-t82zd\") pod \"collect-profiles-29483295-8zv6c\" (UID: \"22846423-24bd-4d85-b2da-a5c75401cd25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.386004 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22846423-24bd-4d85-b2da-a5c75401cd25-config-volume\") pod \"collect-profiles-29483295-8zv6c\" (UID: \"22846423-24bd-4d85-b2da-a5c75401cd25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.387356 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22846423-24bd-4d85-b2da-a5c75401cd25-config-volume\") pod 
\"collect-profiles-29483295-8zv6c\" (UID: \"22846423-24bd-4d85-b2da-a5c75401cd25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.394310 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/22846423-24bd-4d85-b2da-a5c75401cd25-secret-volume\") pod \"collect-profiles-29483295-8zv6c\" (UID: \"22846423-24bd-4d85-b2da-a5c75401cd25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.406778 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t82zd\" (UniqueName: \"kubernetes.io/projected/22846423-24bd-4d85-b2da-a5c75401cd25-kube-api-access-t82zd\") pod \"collect-profiles-29483295-8zv6c\" (UID: \"22846423-24bd-4d85-b2da-a5c75401cd25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.508577 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" Jan 21 12:15:01 crc kubenswrapper[4881]: I0121 12:15:01.032637 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c"] Jan 21 12:15:01 crc kubenswrapper[4881]: I0121 12:15:01.190302 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" event={"ID":"22846423-24bd-4d85-b2da-a5c75401cd25","Type":"ContainerStarted","Data":"f965918bb02890baac237dc8df43e156a9095552fde727cfe31938539fdd3625"} Jan 21 12:15:02 crc kubenswrapper[4881]: I0121 12:15:02.203115 4881 generic.go:334] "Generic (PLEG): container finished" podID="22846423-24bd-4d85-b2da-a5c75401cd25" containerID="bf9af12b6f88ac7a2c2f3b75d58737d697a4cfe360d0edd4e874140a2c1b67eb" exitCode=0 Jan 21 12:15:02 crc kubenswrapper[4881]: I0121 12:15:02.203721 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" event={"ID":"22846423-24bd-4d85-b2da-a5c75401cd25","Type":"ContainerDied","Data":"bf9af12b6f88ac7a2c2f3b75d58737d697a4cfe360d0edd4e874140a2c1b67eb"} Jan 21 12:15:03 crc kubenswrapper[4881]: I0121 12:15:03.654969 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" Jan 21 12:15:03 crc kubenswrapper[4881]: I0121 12:15:03.660851 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t82zd\" (UniqueName: \"kubernetes.io/projected/22846423-24bd-4d85-b2da-a5c75401cd25-kube-api-access-t82zd\") pod \"22846423-24bd-4d85-b2da-a5c75401cd25\" (UID: \"22846423-24bd-4d85-b2da-a5c75401cd25\") " Jan 21 12:15:03 crc kubenswrapper[4881]: I0121 12:15:03.660894 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22846423-24bd-4d85-b2da-a5c75401cd25-config-volume\") pod \"22846423-24bd-4d85-b2da-a5c75401cd25\" (UID: \"22846423-24bd-4d85-b2da-a5c75401cd25\") " Jan 21 12:15:03 crc kubenswrapper[4881]: I0121 12:15:03.660953 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/22846423-24bd-4d85-b2da-a5c75401cd25-secret-volume\") pod \"22846423-24bd-4d85-b2da-a5c75401cd25\" (UID: \"22846423-24bd-4d85-b2da-a5c75401cd25\") " Jan 21 12:15:03 crc kubenswrapper[4881]: I0121 12:15:03.661837 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22846423-24bd-4d85-b2da-a5c75401cd25-config-volume" (OuterVolumeSpecName: "config-volume") pod "22846423-24bd-4d85-b2da-a5c75401cd25" (UID: "22846423-24bd-4d85-b2da-a5c75401cd25"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 12:15:03 crc kubenswrapper[4881]: I0121 12:15:03.668662 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22846423-24bd-4d85-b2da-a5c75401cd25-kube-api-access-t82zd" (OuterVolumeSpecName: "kube-api-access-t82zd") pod "22846423-24bd-4d85-b2da-a5c75401cd25" (UID: "22846423-24bd-4d85-b2da-a5c75401cd25"). InnerVolumeSpecName "kube-api-access-t82zd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:15:03 crc kubenswrapper[4881]: I0121 12:15:03.671932 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22846423-24bd-4d85-b2da-a5c75401cd25-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "22846423-24bd-4d85-b2da-a5c75401cd25" (UID: "22846423-24bd-4d85-b2da-a5c75401cd25"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 12:15:03 crc kubenswrapper[4881]: I0121 12:15:03.763654 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t82zd\" (UniqueName: \"kubernetes.io/projected/22846423-24bd-4d85-b2da-a5c75401cd25-kube-api-access-t82zd\") on node \"crc\" DevicePath \"\"" Jan 21 12:15:03 crc kubenswrapper[4881]: I0121 12:15:03.763701 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22846423-24bd-4d85-b2da-a5c75401cd25-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 12:15:03 crc kubenswrapper[4881]: I0121 12:15:03.763715 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/22846423-24bd-4d85-b2da-a5c75401cd25-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 12:15:04 crc kubenswrapper[4881]: I0121 12:15:04.228909 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" event={"ID":"22846423-24bd-4d85-b2da-a5c75401cd25","Type":"ContainerDied","Data":"f965918bb02890baac237dc8df43e156a9095552fde727cfe31938539fdd3625"} Jan 21 12:15:04 crc kubenswrapper[4881]: I0121 12:15:04.229279 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f965918bb02890baac237dc8df43e156a9095552fde727cfe31938539fdd3625" Jan 21 12:15:04 crc kubenswrapper[4881]: I0121 12:15:04.229050 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" Jan 21 12:15:04 crc kubenswrapper[4881]: I0121 12:15:04.766243 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k"] Jan 21 12:15:04 crc kubenswrapper[4881]: I0121 12:15:04.779006 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k"] Jan 21 12:15:05 crc kubenswrapper[4881]: I0121 12:15:05.333177 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0563880c-563e-4cc5-93a0-c2af095788cb" path="/var/lib/kubelet/pods/0563880c-563e-4cc5-93a0-c2af095788cb/volumes" Jan 21 12:15:11 crc kubenswrapper[4881]: I0121 12:15:11.311425 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:15:11 crc kubenswrapper[4881]: E0121 12:15:11.312264 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:15:24 crc kubenswrapper[4881]: I0121 12:15:24.311633 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:15:24 crc kubenswrapper[4881]: E0121 12:15:24.312906 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:15:33 crc kubenswrapper[4881]: I0121 12:15:33.042566 4881 scope.go:117] "RemoveContainer" containerID="c97b0fba984ac7ac90aa9867ceabf4a4b1015c378fef6bf95655dcf59a8cdfd7" Jan 21 12:15:36 crc kubenswrapper[4881]: I0121 12:15:36.310618 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:15:36 crc kubenswrapper[4881]: E0121 12:15:36.311491 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:15:47 crc kubenswrapper[4881]: I0121 12:15:47.311182 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:15:47 crc kubenswrapper[4881]: E0121 12:15:47.312047 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:16:01 crc kubenswrapper[4881]: I0121 12:16:01.311486 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:16:01 crc kubenswrapper[4881]: E0121 12:16:01.313185 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:16:13 crc kubenswrapper[4881]: I0121 12:16:13.317573 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:16:13 crc kubenswrapper[4881]: E0121 12:16:13.318335 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:16:24 crc kubenswrapper[4881]: I0121 12:16:24.321498 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:16:24 crc kubenswrapper[4881]: E0121 12:16:24.323769 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:16:37 crc kubenswrapper[4881]: I0121 12:16:37.310927 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:16:37 crc kubenswrapper[4881]: E0121 12:16:37.314090 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:16:51 crc kubenswrapper[4881]: I0121 12:16:51.312064 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:16:51 crc kubenswrapper[4881]: E0121 12:16:51.312917 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:17:02 crc kubenswrapper[4881]: I0121 12:17:02.946199 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cpvbs"] Jan 21 12:17:02 crc kubenswrapper[4881]: E0121 12:17:02.949535 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22846423-24bd-4d85-b2da-a5c75401cd25" containerName="collect-profiles" Jan 21 12:17:02 crc kubenswrapper[4881]: I0121 12:17:02.949561 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="22846423-24bd-4d85-b2da-a5c75401cd25" containerName="collect-profiles" Jan 21 12:17:02 crc kubenswrapper[4881]: I0121 12:17:02.949841 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="22846423-24bd-4d85-b2da-a5c75401cd25" containerName="collect-profiles" Jan 21 12:17:02 crc kubenswrapper[4881]: I0121 12:17:02.951811 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:02 crc kubenswrapper[4881]: I0121 12:17:02.989458 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cpvbs"] Jan 21 12:17:02 crc kubenswrapper[4881]: I0121 12:17:02.996200 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02f6c733-139c-44ae-8b73-a6e3057768be-catalog-content\") pod \"certified-operators-cpvbs\" (UID: \"02f6c733-139c-44ae-8b73-a6e3057768be\") " pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:02 crc kubenswrapper[4881]: I0121 12:17:02.996299 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02f6c733-139c-44ae-8b73-a6e3057768be-utilities\") pod \"certified-operators-cpvbs\" (UID: \"02f6c733-139c-44ae-8b73-a6e3057768be\") " pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:02 crc kubenswrapper[4881]: I0121 12:17:02.996369 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v98rg\" (UniqueName: \"kubernetes.io/projected/02f6c733-139c-44ae-8b73-a6e3057768be-kube-api-access-v98rg\") pod \"certified-operators-cpvbs\" (UID: \"02f6c733-139c-44ae-8b73-a6e3057768be\") " pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:03 crc kubenswrapper[4881]: I0121 12:17:03.098646 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02f6c733-139c-44ae-8b73-a6e3057768be-catalog-content\") pod \"certified-operators-cpvbs\" (UID: \"02f6c733-139c-44ae-8b73-a6e3057768be\") " pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:03 crc kubenswrapper[4881]: I0121 12:17:03.099092 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02f6c733-139c-44ae-8b73-a6e3057768be-utilities\") pod \"certified-operators-cpvbs\" (UID: \"02f6c733-139c-44ae-8b73-a6e3057768be\") " pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:03 crc kubenswrapper[4881]: I0121 12:17:03.099268 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v98rg\" (UniqueName: \"kubernetes.io/projected/02f6c733-139c-44ae-8b73-a6e3057768be-kube-api-access-v98rg\") pod \"certified-operators-cpvbs\" (UID: \"02f6c733-139c-44ae-8b73-a6e3057768be\") " pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:03 crc kubenswrapper[4881]: I0121 12:17:03.099606 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02f6c733-139c-44ae-8b73-a6e3057768be-catalog-content\") pod \"certified-operators-cpvbs\" (UID: \"02f6c733-139c-44ae-8b73-a6e3057768be\") " pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:03 crc kubenswrapper[4881]: I0121 12:17:03.099663 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02f6c733-139c-44ae-8b73-a6e3057768be-utilities\") pod \"certified-operators-cpvbs\" (UID: \"02f6c733-139c-44ae-8b73-a6e3057768be\") " pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:03 crc kubenswrapper[4881]: I0121 12:17:03.118062 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-v98rg\" (UniqueName: \"kubernetes.io/projected/02f6c733-139c-44ae-8b73-a6e3057768be-kube-api-access-v98rg\") pod \"certified-operators-cpvbs\" (UID: \"02f6c733-139c-44ae-8b73-a6e3057768be\") " pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:03 crc kubenswrapper[4881]: I0121 12:17:03.306657 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:03 crc kubenswrapper[4881]: I0121 12:17:03.348743 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:17:03 crc kubenswrapper[4881]: E0121 12:17:03.350121 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:17:03 crc kubenswrapper[4881]: I0121 12:17:03.943893 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cpvbs"] Jan 21 12:17:04 crc kubenswrapper[4881]: I0121 12:17:04.653521 4881 generic.go:334] "Generic (PLEG): container finished" podID="02f6c733-139c-44ae-8b73-a6e3057768be" containerID="a0b87011efc9f857e9c9b7e236d8b6a82ba7e871612d5ca1d16d6da3cb3149b9" exitCode=0 Jan 21 12:17:04 crc kubenswrapper[4881]: I0121 12:17:04.653575 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpvbs" event={"ID":"02f6c733-139c-44ae-8b73-a6e3057768be","Type":"ContainerDied","Data":"a0b87011efc9f857e9c9b7e236d8b6a82ba7e871612d5ca1d16d6da3cb3149b9"} Jan 21 12:17:04 crc kubenswrapper[4881]: I0121 12:17:04.653607 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpvbs" event={"ID":"02f6c733-139c-44ae-8b73-a6e3057768be","Type":"ContainerStarted","Data":"60e46a66d15f4fc424d916da6f3a3b1d0bc943c1977338a7d71a92b9ebcd7e0f"} Jan 21 12:17:04 crc kubenswrapper[4881]: I0121 12:17:04.656121 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 12:17:06 crc kubenswrapper[4881]: I0121 12:17:06.677615 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpvbs" event={"ID":"02f6c733-139c-44ae-8b73-a6e3057768be","Type":"ContainerStarted","Data":"ea75680997b7ad974c558c644f3582b50eefc713815cb4d9b60e64b010e20743"} Jan 21 12:17:07 crc kubenswrapper[4881]: I0121 12:17:07.688918 4881 generic.go:334] "Generic (PLEG): container finished" podID="02f6c733-139c-44ae-8b73-a6e3057768be" containerID="ea75680997b7ad974c558c644f3582b50eefc713815cb4d9b60e64b010e20743" exitCode=0 Jan 21 12:17:07 crc kubenswrapper[4881]: I0121 12:17:07.689004 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpvbs" event={"ID":"02f6c733-139c-44ae-8b73-a6e3057768be","Type":"ContainerDied","Data":"ea75680997b7ad974c558c644f3582b50eefc713815cb4d9b60e64b010e20743"} Jan 21 12:17:10 crc kubenswrapper[4881]: I0121 12:17:10.723110 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpvbs" 
event={"ID":"02f6c733-139c-44ae-8b73-a6e3057768be","Type":"ContainerStarted","Data":"efa4780df099e6fc0b25f85730952cac3a1da5dce78d5144ac0a9df0692a392d"} Jan 21 12:17:10 crc kubenswrapper[4881]: I0121 12:17:10.757995 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cpvbs" podStartSLOduration=3.56108408 podStartE2EDuration="8.757903736s" podCreationTimestamp="2026-01-21 12:17:02 +0000 UTC" firstStartedPulling="2026-01-21 12:17:04.655748062 +0000 UTC m=+4811.915704531" lastFinishedPulling="2026-01-21 12:17:09.852567708 +0000 UTC m=+4817.112524187" observedRunningTime="2026-01-21 12:17:10.741452829 +0000 UTC m=+4818.001409308" watchObservedRunningTime="2026-01-21 12:17:10.757903736 +0000 UTC m=+4818.017860225" Jan 21 12:17:13 crc kubenswrapper[4881]: I0121 12:17:13.308213 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:13 crc kubenswrapper[4881]: I0121 12:17:13.308710 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:13 crc kubenswrapper[4881]: I0121 12:17:13.426563 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:14 crc kubenswrapper[4881]: I0121 12:17:14.310907 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:17:14 crc kubenswrapper[4881]: E0121 12:17:14.312008 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:17:23 crc kubenswrapper[4881]: I0121 12:17:23.374421 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:23 crc kubenswrapper[4881]: I0121 12:17:23.429471 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cpvbs"] Jan 21 12:17:23 crc kubenswrapper[4881]: I0121 12:17:23.859825 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cpvbs" podUID="02f6c733-139c-44ae-8b73-a6e3057768be" containerName="registry-server" containerID="cri-o://efa4780df099e6fc0b25f85730952cac3a1da5dce78d5144ac0a9df0692a392d" gracePeriod=2 Jan 21 12:17:24 crc kubenswrapper[4881]: I0121 12:17:24.880998 4881 generic.go:334] "Generic (PLEG): container finished" podID="02f6c733-139c-44ae-8b73-a6e3057768be" containerID="efa4780df099e6fc0b25f85730952cac3a1da5dce78d5144ac0a9df0692a392d" exitCode=0 Jan 21 12:17:24 crc kubenswrapper[4881]: I0121 12:17:24.881244 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpvbs" event={"ID":"02f6c733-139c-44ae-8b73-a6e3057768be","Type":"ContainerDied","Data":"efa4780df099e6fc0b25f85730952cac3a1da5dce78d5144ac0a9df0692a392d"} Jan 21 12:17:25 crc kubenswrapper[4881]: I0121 12:17:25.311007 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:17:25 crc 
kubenswrapper[4881]: E0121 12:17:25.311580 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.512977 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.561859 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02f6c733-139c-44ae-8b73-a6e3057768be-utilities\") pod \"02f6c733-139c-44ae-8b73-a6e3057768be\" (UID: \"02f6c733-139c-44ae-8b73-a6e3057768be\") " Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.561930 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02f6c733-139c-44ae-8b73-a6e3057768be-catalog-content\") pod \"02f6c733-139c-44ae-8b73-a6e3057768be\" (UID: \"02f6c733-139c-44ae-8b73-a6e3057768be\") " Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.562052 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v98rg\" (UniqueName: \"kubernetes.io/projected/02f6c733-139c-44ae-8b73-a6e3057768be-kube-api-access-v98rg\") pod \"02f6c733-139c-44ae-8b73-a6e3057768be\" (UID: \"02f6c733-139c-44ae-8b73-a6e3057768be\") " Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.563047 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02f6c733-139c-44ae-8b73-a6e3057768be-utilities" (OuterVolumeSpecName: "utilities") pod "02f6c733-139c-44ae-8b73-a6e3057768be" (UID: "02f6c733-139c-44ae-8b73-a6e3057768be"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.575092 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02f6c733-139c-44ae-8b73-a6e3057768be-kube-api-access-v98rg" (OuterVolumeSpecName: "kube-api-access-v98rg") pod "02f6c733-139c-44ae-8b73-a6e3057768be" (UID: "02f6c733-139c-44ae-8b73-a6e3057768be"). InnerVolumeSpecName "kube-api-access-v98rg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.609276 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02f6c733-139c-44ae-8b73-a6e3057768be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02f6c733-139c-44ae-8b73-a6e3057768be" (UID: "02f6c733-139c-44ae-8b73-a6e3057768be"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.664195 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v98rg\" (UniqueName: \"kubernetes.io/projected/02f6c733-139c-44ae-8b73-a6e3057768be-kube-api-access-v98rg\") on node \"crc\" DevicePath \"\"" Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.664235 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02f6c733-139c-44ae-8b73-a6e3057768be-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.664245 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02f6c733-139c-44ae-8b73-a6e3057768be-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.903031 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpvbs" event={"ID":"02f6c733-139c-44ae-8b73-a6e3057768be","Type":"ContainerDied","Data":"60e46a66d15f4fc424d916da6f3a3b1d0bc943c1977338a7d71a92b9ebcd7e0f"} Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.903098 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.903121 4881 scope.go:117] "RemoveContainer" containerID="efa4780df099e6fc0b25f85730952cac3a1da5dce78d5144ac0a9df0692a392d" Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.933829 4881 scope.go:117] "RemoveContainer" containerID="ea75680997b7ad974c558c644f3582b50eefc713815cb4d9b60e64b010e20743" Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.963858 4881 scope.go:117] "RemoveContainer" containerID="a0b87011efc9f857e9c9b7e236d8b6a82ba7e871612d5ca1d16d6da3cb3149b9" Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.972106 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cpvbs"] Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.989944 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cpvbs"] Jan 21 12:17:27 crc kubenswrapper[4881]: I0121 12:17:27.324711 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02f6c733-139c-44ae-8b73-a6e3057768be" path="/var/lib/kubelet/pods/02f6c733-139c-44ae-8b73-a6e3057768be/volumes" Jan 21 12:17:38 crc kubenswrapper[4881]: I0121 12:17:38.311050 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:17:39 crc kubenswrapper[4881]: I0121 12:17:39.109155 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"51f7deb68e0f4978c7b2866156b4751c1ca416f1a21d198c62277ed590bf5923"} Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.424566 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gmj66"] Jan 21 12:19:29 crc kubenswrapper[4881]: E0121 12:19:29.425724 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f6c733-139c-44ae-8b73-a6e3057768be" containerName="extract-utilities" Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.425744 4881 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="02f6c733-139c-44ae-8b73-a6e3057768be" containerName="extract-utilities" Jan 21 12:19:29 crc kubenswrapper[4881]: E0121 12:19:29.425756 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f6c733-139c-44ae-8b73-a6e3057768be" containerName="registry-server" Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.425764 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f6c733-139c-44ae-8b73-a6e3057768be" containerName="registry-server" Jan 21 12:19:29 crc kubenswrapper[4881]: E0121 12:19:29.425781 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f6c733-139c-44ae-8b73-a6e3057768be" containerName="extract-content" Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.425814 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f6c733-139c-44ae-8b73-a6e3057768be" containerName="extract-content" Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.426075 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="02f6c733-139c-44ae-8b73-a6e3057768be" containerName="registry-server" Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.427596 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.442687 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gmj66"] Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.558837 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-utilities\") pod \"community-operators-gmj66\" (UID: \"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae\") " pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.559050 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-catalog-content\") pod \"community-operators-gmj66\" (UID: \"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae\") " pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.559396 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8knlw\" (UniqueName: \"kubernetes.io/projected/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-kube-api-access-8knlw\") pod \"community-operators-gmj66\" (UID: \"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae\") " pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.662850 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-catalog-content\") pod \"community-operators-gmj66\" (UID: \"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae\") " pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.663267 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8knlw\" (UniqueName: \"kubernetes.io/projected/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-kube-api-access-8knlw\") pod \"community-operators-gmj66\" (UID: \"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae\") " pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 
12:19:29.663519 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-utilities\") pod \"community-operators-gmj66\" (UID: \"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae\") " pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.663518 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-catalog-content\") pod \"community-operators-gmj66\" (UID: \"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae\") " pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.666110 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-utilities\") pod \"community-operators-gmj66\" (UID: \"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae\") " pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.687322 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8knlw\" (UniqueName: \"kubernetes.io/projected/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-kube-api-access-8knlw\") pod \"community-operators-gmj66\" (UID: \"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae\") " pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.745807 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:30 crc kubenswrapper[4881]: I0121 12:19:30.274557 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gmj66"] Jan 21 12:19:30 crc kubenswrapper[4881]: I0121 12:19:30.940579 4881 generic.go:334] "Generic (PLEG): container finished" podID="5f1e0f74-1d2a-4465-8563-fbe80d7c3eae" containerID="140c746fb595bcdc6444b28c06408889b47367d2f25c5808c0a8fcdbed1f2ac9" exitCode=0 Jan 21 12:19:30 crc kubenswrapper[4881]: I0121 12:19:30.940667 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmj66" event={"ID":"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae","Type":"ContainerDied","Data":"140c746fb595bcdc6444b28c06408889b47367d2f25c5808c0a8fcdbed1f2ac9"} Jan 21 12:19:30 crc kubenswrapper[4881]: I0121 12:19:30.941225 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmj66" event={"ID":"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae","Type":"ContainerStarted","Data":"00f9fc65f13c846fdde5e4ff3376ce54ecbfd5bbdaba0e6b34fd2b171a2ee7ea"} Jan 21 12:19:32 crc kubenswrapper[4881]: I0121 12:19:32.975302 4881 generic.go:334] "Generic (PLEG): container finished" podID="5f1e0f74-1d2a-4465-8563-fbe80d7c3eae" containerID="0a34300fe7fd36429f390a68e435ba2f8b3b17330d25fcff261987924f6d2dd6" exitCode=0 Jan 21 12:19:32 crc kubenswrapper[4881]: I0121 12:19:32.975493 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmj66" event={"ID":"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae","Type":"ContainerDied","Data":"0a34300fe7fd36429f390a68e435ba2f8b3b17330d25fcff261987924f6d2dd6"} Jan 21 12:19:33 crc kubenswrapper[4881]: I0121 12:19:33.987304 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmj66" 
event={"ID":"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae","Type":"ContainerStarted","Data":"f50bffc57c220ba8a1cb6602165d9dfff61080bfaa79bc9002dd57293d28f35a"} Jan 21 12:19:34 crc kubenswrapper[4881]: I0121 12:19:34.016143 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gmj66" podStartSLOduration=2.558955824 podStartE2EDuration="5.016120888s" podCreationTimestamp="2026-01-21 12:19:29 +0000 UTC" firstStartedPulling="2026-01-21 12:19:30.943031362 +0000 UTC m=+4958.202987831" lastFinishedPulling="2026-01-21 12:19:33.400196426 +0000 UTC m=+4960.660152895" observedRunningTime="2026-01-21 12:19:34.003939799 +0000 UTC m=+4961.263896268" watchObservedRunningTime="2026-01-21 12:19:34.016120888 +0000 UTC m=+4961.276077357" Jan 21 12:19:39 crc kubenswrapper[4881]: I0121 12:19:39.746605 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:39 crc kubenswrapper[4881]: I0121 12:19:39.747162 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:39 crc kubenswrapper[4881]: I0121 12:19:39.797205 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:40 crc kubenswrapper[4881]: I0121 12:19:40.101334 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:40 crc kubenswrapper[4881]: I0121 12:19:40.160700 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gmj66"] Jan 21 12:19:42 crc kubenswrapper[4881]: I0121 12:19:42.058225 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gmj66" podUID="5f1e0f74-1d2a-4465-8563-fbe80d7c3eae" containerName="registry-server" containerID="cri-o://f50bffc57c220ba8a1cb6602165d9dfff61080bfaa79bc9002dd57293d28f35a" gracePeriod=2 Jan 21 12:19:42 crc kubenswrapper[4881]: I0121 12:19:42.614002 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:42 crc kubenswrapper[4881]: I0121 12:19:42.783903 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8knlw\" (UniqueName: \"kubernetes.io/projected/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-kube-api-access-8knlw\") pod \"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae\" (UID: \"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae\") " Jan 21 12:19:42 crc kubenswrapper[4881]: I0121 12:19:42.784002 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-catalog-content\") pod \"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae\" (UID: \"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae\") " Jan 21 12:19:42 crc kubenswrapper[4881]: I0121 12:19:42.784292 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-utilities\") pod \"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae\" (UID: \"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae\") " Jan 21 12:19:42 crc kubenswrapper[4881]: I0121 12:19:42.785718 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-utilities" (OuterVolumeSpecName: "utilities") pod "5f1e0f74-1d2a-4465-8563-fbe80d7c3eae" (UID: "5f1e0f74-1d2a-4465-8563-fbe80d7c3eae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:19:42 crc kubenswrapper[4881]: I0121 12:19:42.887594 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:19:42 crc kubenswrapper[4881]: I0121 12:19:42.924046 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f1e0f74-1d2a-4465-8563-fbe80d7c3eae" (UID: "5f1e0f74-1d2a-4465-8563-fbe80d7c3eae"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:19:42 crc kubenswrapper[4881]: I0121 12:19:42.991187 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.073211 4881 generic.go:334] "Generic (PLEG): container finished" podID="5f1e0f74-1d2a-4465-8563-fbe80d7c3eae" containerID="f50bffc57c220ba8a1cb6602165d9dfff61080bfaa79bc9002dd57293d28f35a" exitCode=0 Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.073268 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmj66" event={"ID":"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae","Type":"ContainerDied","Data":"f50bffc57c220ba8a1cb6602165d9dfff61080bfaa79bc9002dd57293d28f35a"} Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.073307 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmj66" event={"ID":"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae","Type":"ContainerDied","Data":"00f9fc65f13c846fdde5e4ff3376ce54ecbfd5bbdaba0e6b34fd2b171a2ee7ea"} Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.073334 4881 scope.go:117] "RemoveContainer" containerID="f50bffc57c220ba8a1cb6602165d9dfff61080bfaa79bc9002dd57293d28f35a" Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.073527 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.108687 4881 scope.go:117] "RemoveContainer" containerID="0a34300fe7fd36429f390a68e435ba2f8b3b17330d25fcff261987924f6d2dd6" Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.271192 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-kube-api-access-8knlw" (OuterVolumeSpecName: "kube-api-access-8knlw") pod "5f1e0f74-1d2a-4465-8563-fbe80d7c3eae" (UID: "5f1e0f74-1d2a-4465-8563-fbe80d7c3eae"). InnerVolumeSpecName "kube-api-access-8knlw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.290131 4881 scope.go:117] "RemoveContainer" containerID="140c746fb595bcdc6444b28c06408889b47367d2f25c5808c0a8fcdbed1f2ac9" Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.298753 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8knlw\" (UniqueName: \"kubernetes.io/projected/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-kube-api-access-8knlw\") on node \"crc\" DevicePath \"\"" Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.441928 4881 scope.go:117] "RemoveContainer" containerID="f50bffc57c220ba8a1cb6602165d9dfff61080bfaa79bc9002dd57293d28f35a" Jan 21 12:19:43 crc kubenswrapper[4881]: E0121 12:19:43.442488 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f50bffc57c220ba8a1cb6602165d9dfff61080bfaa79bc9002dd57293d28f35a\": container with ID starting with f50bffc57c220ba8a1cb6602165d9dfff61080bfaa79bc9002dd57293d28f35a not found: ID does not exist" containerID="f50bffc57c220ba8a1cb6602165d9dfff61080bfaa79bc9002dd57293d28f35a" Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.442555 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f50bffc57c220ba8a1cb6602165d9dfff61080bfaa79bc9002dd57293d28f35a"} err="failed to get container status \"f50bffc57c220ba8a1cb6602165d9dfff61080bfaa79bc9002dd57293d28f35a\": rpc error: code = NotFound desc = could not find container \"f50bffc57c220ba8a1cb6602165d9dfff61080bfaa79bc9002dd57293d28f35a\": container with ID starting with f50bffc57c220ba8a1cb6602165d9dfff61080bfaa79bc9002dd57293d28f35a not found: ID does not exist" Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.442593 4881 scope.go:117] "RemoveContainer" containerID="0a34300fe7fd36429f390a68e435ba2f8b3b17330d25fcff261987924f6d2dd6" Jan 21 12:19:43 crc kubenswrapper[4881]: E0121 12:19:43.443006 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a34300fe7fd36429f390a68e435ba2f8b3b17330d25fcff261987924f6d2dd6\": container with ID starting with 0a34300fe7fd36429f390a68e435ba2f8b3b17330d25fcff261987924f6d2dd6 not found: ID does not exist" containerID="0a34300fe7fd36429f390a68e435ba2f8b3b17330d25fcff261987924f6d2dd6" Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.443043 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a34300fe7fd36429f390a68e435ba2f8b3b17330d25fcff261987924f6d2dd6"} err="failed to get container status \"0a34300fe7fd36429f390a68e435ba2f8b3b17330d25fcff261987924f6d2dd6\": rpc error: code = NotFound desc = could not find container \"0a34300fe7fd36429f390a68e435ba2f8b3b17330d25fcff261987924f6d2dd6\": container with ID starting with 0a34300fe7fd36429f390a68e435ba2f8b3b17330d25fcff261987924f6d2dd6 not found: ID does not exist" Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.443068 4881 scope.go:117] "RemoveContainer" containerID="140c746fb595bcdc6444b28c06408889b47367d2f25c5808c0a8fcdbed1f2ac9" Jan 21 12:19:43 crc kubenswrapper[4881]: E0121 12:19:43.444956 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"140c746fb595bcdc6444b28c06408889b47367d2f25c5808c0a8fcdbed1f2ac9\": container with ID starting with 140c746fb595bcdc6444b28c06408889b47367d2f25c5808c0a8fcdbed1f2ac9 not found: ID does not 
exist" containerID="140c746fb595bcdc6444b28c06408889b47367d2f25c5808c0a8fcdbed1f2ac9" Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.445005 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"140c746fb595bcdc6444b28c06408889b47367d2f25c5808c0a8fcdbed1f2ac9"} err="failed to get container status \"140c746fb595bcdc6444b28c06408889b47367d2f25c5808c0a8fcdbed1f2ac9\": rpc error: code = NotFound desc = could not find container \"140c746fb595bcdc6444b28c06408889b47367d2f25c5808c0a8fcdbed1f2ac9\": container with ID starting with 140c746fb595bcdc6444b28c06408889b47367d2f25c5808c0a8fcdbed1f2ac9 not found: ID does not exist" Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.500571 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gmj66"] Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.510657 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gmj66"] Jan 21 12:19:45 crc kubenswrapper[4881]: I0121 12:19:45.325270 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f1e0f74-1d2a-4465-8563-fbe80d7c3eae" path="/var/lib/kubelet/pods/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae/volumes" Jan 21 12:19:59 crc kubenswrapper[4881]: I0121 12:19:59.851659 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:19:59 crc kubenswrapper[4881]: I0121 12:19:59.852394 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:20:29 crc kubenswrapper[4881]: I0121 12:20:29.851381 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:20:29 crc kubenswrapper[4881]: I0121 12:20:29.851939 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:20:59 crc kubenswrapper[4881]: I0121 12:20:59.851718 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:20:59 crc kubenswrapper[4881]: I0121 12:20:59.852353 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:20:59 crc 
kubenswrapper[4881]: I0121 12:20:59.852416 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 12:20:59 crc kubenswrapper[4881]: I0121 12:20:59.853375 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"51f7deb68e0f4978c7b2866156b4751c1ca416f1a21d198c62277ed590bf5923"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 12:20:59 crc kubenswrapper[4881]: I0121 12:20:59.853439 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://51f7deb68e0f4978c7b2866156b4751c1ca416f1a21d198c62277ed590bf5923" gracePeriod=600 Jan 21 12:21:00 crc kubenswrapper[4881]: I0121 12:21:00.068740 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="51f7deb68e0f4978c7b2866156b4751c1ca416f1a21d198c62277ed590bf5923" exitCode=0 Jan 21 12:21:00 crc kubenswrapper[4881]: I0121 12:21:00.068802 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"51f7deb68e0f4978c7b2866156b4751c1ca416f1a21d198c62277ed590bf5923"} Jan 21 12:21:00 crc kubenswrapper[4881]: I0121 12:21:00.068843 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:21:02 crc kubenswrapper[4881]: I0121 12:21:02.094752 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"} Jan 21 12:23:29 crc kubenswrapper[4881]: I0121 12:23:29.851119 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:23:29 crc kubenswrapper[4881]: I0121 12:23:29.851748 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:23:59 crc kubenswrapper[4881]: I0121 12:23:59.851711 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:23:59 crc kubenswrapper[4881]: I0121 12:23:59.852336 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.516245 4881 trace.go:236] Trace[298046428]: "Calculate volume metrics of prometheus-metric-storage-db for pod openstack/prometheus-metric-storage-0" (21-Jan-2026 12:24:03.749) (total time: 6767ms): Jan 21 12:24:10 crc kubenswrapper[4881]: Trace[298046428]: [6.767087145s] [6.767087145s] END Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.768983 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rm5xm"] Jan 21 12:24:10 crc kubenswrapper[4881]: E0121 12:24:10.774099 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1e0f74-1d2a-4465-8563-fbe80d7c3eae" containerName="registry-server" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.774156 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1e0f74-1d2a-4465-8563-fbe80d7c3eae" containerName="registry-server" Jan 21 12:24:10 crc kubenswrapper[4881]: E0121 12:24:10.774237 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1e0f74-1d2a-4465-8563-fbe80d7c3eae" containerName="extract-utilities" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.774246 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1e0f74-1d2a-4465-8563-fbe80d7c3eae" containerName="extract-utilities" Jan 21 12:24:10 crc kubenswrapper[4881]: E0121 12:24:10.774261 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1e0f74-1d2a-4465-8563-fbe80d7c3eae" containerName="extract-content" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.774269 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1e0f74-1d2a-4465-8563-fbe80d7c3eae" containerName="extract-content" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.774746 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f1e0f74-1d2a-4465-8563-fbe80d7c3eae" containerName="registry-server" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.782143 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.797878 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rm5xm"] Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.829178 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8rsd\" (UniqueName: \"kubernetes.io/projected/0ba62402-c750-4507-afb1-a4bc0cbb5659-kube-api-access-p8rsd\") pod \"redhat-marketplace-rm5xm\" (UID: \"0ba62402-c750-4507-afb1-a4bc0cbb5659\") " pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.829473 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ba62402-c750-4507-afb1-a4bc0cbb5659-utilities\") pod \"redhat-marketplace-rm5xm\" (UID: \"0ba62402-c750-4507-afb1-a4bc0cbb5659\") " pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.829720 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ba62402-c750-4507-afb1-a4bc0cbb5659-catalog-content\") pod \"redhat-marketplace-rm5xm\" (UID: \"0ba62402-c750-4507-afb1-a4bc0cbb5659\") " pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.930416 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7696g"] Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.932119 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ba62402-c750-4507-afb1-a4bc0cbb5659-catalog-content\") pod \"redhat-marketplace-rm5xm\" (UID: \"0ba62402-c750-4507-afb1-a4bc0cbb5659\") " pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.932217 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8rsd\" (UniqueName: \"kubernetes.io/projected/0ba62402-c750-4507-afb1-a4bc0cbb5659-kube-api-access-p8rsd\") pod \"redhat-marketplace-rm5xm\" (UID: \"0ba62402-c750-4507-afb1-a4bc0cbb5659\") " pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.932277 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ba62402-c750-4507-afb1-a4bc0cbb5659-utilities\") pod \"redhat-marketplace-rm5xm\" (UID: \"0ba62402-c750-4507-afb1-a4bc0cbb5659\") " pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.932853 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.933010 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ba62402-c750-4507-afb1-a4bc0cbb5659-utilities\") pod \"redhat-marketplace-rm5xm\" (UID: \"0ba62402-c750-4507-afb1-a4bc0cbb5659\") " pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.933307 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ba62402-c750-4507-afb1-a4bc0cbb5659-catalog-content\") pod \"redhat-marketplace-rm5xm\" (UID: \"0ba62402-c750-4507-afb1-a4bc0cbb5659\") " pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.945334 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7696g"] Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.957684 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8rsd\" (UniqueName: \"kubernetes.io/projected/0ba62402-c750-4507-afb1-a4bc0cbb5659-kube-api-access-p8rsd\") pod \"redhat-marketplace-rm5xm\" (UID: \"0ba62402-c750-4507-afb1-a4bc0cbb5659\") " pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:11 crc kubenswrapper[4881]: I0121 12:24:11.035005 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19920016-1549-4841-b51a-4571079dfd12-utilities\") pod \"redhat-operators-7696g\" (UID: \"19920016-1549-4841-b51a-4571079dfd12\") " pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:11 crc kubenswrapper[4881]: I0121 12:24:11.035121 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19920016-1549-4841-b51a-4571079dfd12-catalog-content\") pod \"redhat-operators-7696g\" (UID: \"19920016-1549-4841-b51a-4571079dfd12\") " pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:11 crc kubenswrapper[4881]: I0121 12:24:11.035209 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x278n\" (UniqueName: \"kubernetes.io/projected/19920016-1549-4841-b51a-4571079dfd12-kube-api-access-x278n\") pod \"redhat-operators-7696g\" (UID: \"19920016-1549-4841-b51a-4571079dfd12\") " pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:11 crc kubenswrapper[4881]: I0121 12:24:11.111314 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:11 crc kubenswrapper[4881]: I0121 12:24:11.143775 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19920016-1549-4841-b51a-4571079dfd12-catalog-content\") pod \"redhat-operators-7696g\" (UID: \"19920016-1549-4841-b51a-4571079dfd12\") " pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:11 crc kubenswrapper[4881]: I0121 12:24:11.143892 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x278n\" (UniqueName: \"kubernetes.io/projected/19920016-1549-4841-b51a-4571079dfd12-kube-api-access-x278n\") pod \"redhat-operators-7696g\" (UID: \"19920016-1549-4841-b51a-4571079dfd12\") " pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:11 crc kubenswrapper[4881]: I0121 12:24:11.144094 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19920016-1549-4841-b51a-4571079dfd12-utilities\") pod \"redhat-operators-7696g\" (UID: \"19920016-1549-4841-b51a-4571079dfd12\") " pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:11 crc kubenswrapper[4881]: I0121 12:24:11.144989 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19920016-1549-4841-b51a-4571079dfd12-catalog-content\") pod \"redhat-operators-7696g\" (UID: \"19920016-1549-4841-b51a-4571079dfd12\") " pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:11 crc kubenswrapper[4881]: I0121 12:24:11.147827 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19920016-1549-4841-b51a-4571079dfd12-utilities\") pod \"redhat-operators-7696g\" (UID: \"19920016-1549-4841-b51a-4571079dfd12\") " pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:11 crc kubenswrapper[4881]: I0121 12:24:11.177720 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x278n\" (UniqueName: \"kubernetes.io/projected/19920016-1549-4841-b51a-4571079dfd12-kube-api-access-x278n\") pod \"redhat-operators-7696g\" (UID: \"19920016-1549-4841-b51a-4571079dfd12\") " pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:11 crc kubenswrapper[4881]: I0121 12:24:11.258481 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:11 crc kubenswrapper[4881]: I0121 12:24:11.867393 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rm5xm"] Jan 21 12:24:11 crc kubenswrapper[4881]: I0121 12:24:11.965594 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7696g"] Jan 21 12:24:12 crc kubenswrapper[4881]: I0121 12:24:12.920257 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7696g" event={"ID":"19920016-1549-4841-b51a-4571079dfd12","Type":"ContainerStarted","Data":"55d842e7bd717974cdf52ec1477da5ecf0227134a5bbda5a2e4ccd1cb867fd3b"} Jan 21 12:24:12 crc kubenswrapper[4881]: I0121 12:24:12.940962 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm5xm" event={"ID":"0ba62402-c750-4507-afb1-a4bc0cbb5659","Type":"ContainerStarted","Data":"3bf40f3659d32a5324d6f5ded95c6c1fa84643efcf43ead247e37f6b81603f5f"} Jan 21 12:24:13 crc kubenswrapper[4881]: I0121 12:24:13.955230 4881 generic.go:334] "Generic (PLEG): container finished" podID="0ba62402-c750-4507-afb1-a4bc0cbb5659" containerID="37f4380c9f0bded2a8d74e846aa0359ad6632d6691c8866fa1b38a0840862cee" exitCode=0 Jan 21 12:24:13 crc kubenswrapper[4881]: I0121 12:24:13.955292 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm5xm" event={"ID":"0ba62402-c750-4507-afb1-a4bc0cbb5659","Type":"ContainerDied","Data":"37f4380c9f0bded2a8d74e846aa0359ad6632d6691c8866fa1b38a0840862cee"} Jan 21 12:24:13 crc kubenswrapper[4881]: I0121 12:24:13.958169 4881 generic.go:334] "Generic (PLEG): container finished" podID="19920016-1549-4841-b51a-4571079dfd12" containerID="e96a1cae04cf77c68174148d44645ae46ea9275c3a26364221425c3a279d1888" exitCode=0 Jan 21 12:24:13 crc kubenswrapper[4881]: I0121 12:24:13.958219 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7696g" event={"ID":"19920016-1549-4841-b51a-4571079dfd12","Type":"ContainerDied","Data":"e96a1cae04cf77c68174148d44645ae46ea9275c3a26364221425c3a279d1888"} Jan 21 12:24:13 crc kubenswrapper[4881]: I0121 12:24:13.958265 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 12:24:14 crc kubenswrapper[4881]: I0121 12:24:14.977260 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7696g" event={"ID":"19920016-1549-4841-b51a-4571079dfd12","Type":"ContainerStarted","Data":"654b76266297416bb42449a45940dda64d9c9dce72b47a3a5bad8b637cf06338"} Jan 21 12:24:16 crc kubenswrapper[4881]: I0121 12:24:16.227848 4881 generic.go:334] "Generic (PLEG): container finished" podID="0ba62402-c750-4507-afb1-a4bc0cbb5659" containerID="ffc511f2b91abb8aa0c1b1c2de1899a0ace55f62e734b5204a731da6814cb8ce" exitCode=0 Jan 21 12:24:16 crc kubenswrapper[4881]: I0121 12:24:16.230198 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm5xm" event={"ID":"0ba62402-c750-4507-afb1-a4bc0cbb5659","Type":"ContainerDied","Data":"ffc511f2b91abb8aa0c1b1c2de1899a0ace55f62e734b5204a731da6814cb8ce"} Jan 21 12:24:19 crc kubenswrapper[4881]: I0121 12:24:19.269499 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm5xm" 
event={"ID":"0ba62402-c750-4507-afb1-a4bc0cbb5659","Type":"ContainerStarted","Data":"cfe3f359c9da9107984b40eb9353cd42eb19a785d40d76f43671efed2ca5d72d"} Jan 21 12:24:19 crc kubenswrapper[4881]: I0121 12:24:19.300042 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rm5xm" podStartSLOduration=5.503195362 podStartE2EDuration="9.299998896s" podCreationTimestamp="2026-01-21 12:24:10 +0000 UTC" firstStartedPulling="2026-01-21 12:24:13.957833418 +0000 UTC m=+5241.217789887" lastFinishedPulling="2026-01-21 12:24:17.754636952 +0000 UTC m=+5245.014593421" observedRunningTime="2026-01-21 12:24:19.288034083 +0000 UTC m=+5246.547990562" watchObservedRunningTime="2026-01-21 12:24:19.299998896 +0000 UTC m=+5246.559955365" Jan 21 12:24:20 crc kubenswrapper[4881]: I0121 12:24:20.283012 4881 generic.go:334] "Generic (PLEG): container finished" podID="19920016-1549-4841-b51a-4571079dfd12" containerID="654b76266297416bb42449a45940dda64d9c9dce72b47a3a5bad8b637cf06338" exitCode=0 Jan 21 12:24:20 crc kubenswrapper[4881]: I0121 12:24:20.283083 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7696g" event={"ID":"19920016-1549-4841-b51a-4571079dfd12","Type":"ContainerDied","Data":"654b76266297416bb42449a45940dda64d9c9dce72b47a3a5bad8b637cf06338"} Jan 21 12:24:21 crc kubenswrapper[4881]: I0121 12:24:21.111522 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:21 crc kubenswrapper[4881]: I0121 12:24:21.111877 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:21 crc kubenswrapper[4881]: I0121 12:24:21.161922 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:21 crc kubenswrapper[4881]: I0121 12:24:21.295295 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7696g" event={"ID":"19920016-1549-4841-b51a-4571079dfd12","Type":"ContainerStarted","Data":"ec4a6d88164cb676deacc2160eb5db150bf8626f37c2841b485ba6ee59a8c9fa"} Jan 21 12:24:21 crc kubenswrapper[4881]: I0121 12:24:21.326307 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7696g" podStartSLOduration=4.518929928 podStartE2EDuration="11.326287337s" podCreationTimestamp="2026-01-21 12:24:10 +0000 UTC" firstStartedPulling="2026-01-21 12:24:13.960177356 +0000 UTC m=+5241.220133835" lastFinishedPulling="2026-01-21 12:24:20.767534775 +0000 UTC m=+5248.027491244" observedRunningTime="2026-01-21 12:24:21.323288634 +0000 UTC m=+5248.583245103" watchObservedRunningTime="2026-01-21 12:24:21.326287337 +0000 UTC m=+5248.586243806" Jan 21 12:24:29 crc kubenswrapper[4881]: I0121 12:24:29.850877 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:24:29 crc kubenswrapper[4881]: I0121 12:24:29.851565 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:24:29 crc kubenswrapper[4881]: I0121 12:24:29.851642 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 12:24:29 crc kubenswrapper[4881]: I0121 12:24:29.852946 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 12:24:29 crc kubenswrapper[4881]: I0121 12:24:29.853116 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0" gracePeriod=600 Jan 21 12:24:29 crc kubenswrapper[4881]: E0121 12:24:29.980516 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:24:30 crc kubenswrapper[4881]: I0121 12:24:30.397684 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0" exitCode=0 Jan 21 12:24:30 crc kubenswrapper[4881]: I0121 12:24:30.397983 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"} Jan 21 12:24:30 crc kubenswrapper[4881]: I0121 12:24:30.398322 4881 scope.go:117] "RemoveContainer" containerID="51f7deb68e0f4978c7b2866156b4751c1ca416f1a21d198c62277ed590bf5923" Jan 21 12:24:30 crc kubenswrapper[4881]: I0121 12:24:30.399414 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0" Jan 21 12:24:30 crc kubenswrapper[4881]: E0121 12:24:30.400028 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:24:31 crc kubenswrapper[4881]: I0121 12:24:31.167643 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:31 crc kubenswrapper[4881]: I0121 12:24:31.228911 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rm5xm"] Jan 21 12:24:31 crc kubenswrapper[4881]: I0121 12:24:31.259062 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:31 crc kubenswrapper[4881]: I0121 12:24:31.259160 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:31 crc kubenswrapper[4881]: I0121 12:24:31.306935 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:31 crc kubenswrapper[4881]: I0121 12:24:31.409915 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rm5xm" podUID="0ba62402-c750-4507-afb1-a4bc0cbb5659" containerName="registry-server" containerID="cri-o://cfe3f359c9da9107984b40eb9353cd42eb19a785d40d76f43671efed2ca5d72d" gracePeriod=2 Jan 21 12:24:31 crc kubenswrapper[4881]: I0121 12:24:31.461415 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:31 crc kubenswrapper[4881]: I0121 12:24:31.879531 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.072803 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ba62402-c750-4507-afb1-a4bc0cbb5659-catalog-content\") pod \"0ba62402-c750-4507-afb1-a4bc0cbb5659\" (UID: \"0ba62402-c750-4507-afb1-a4bc0cbb5659\") " Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.073280 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8rsd\" (UniqueName: \"kubernetes.io/projected/0ba62402-c750-4507-afb1-a4bc0cbb5659-kube-api-access-p8rsd\") pod \"0ba62402-c750-4507-afb1-a4bc0cbb5659\" (UID: \"0ba62402-c750-4507-afb1-a4bc0cbb5659\") " Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.073427 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ba62402-c750-4507-afb1-a4bc0cbb5659-utilities\") pod \"0ba62402-c750-4507-afb1-a4bc0cbb5659\" (UID: \"0ba62402-c750-4507-afb1-a4bc0cbb5659\") " Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.074138 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ba62402-c750-4507-afb1-a4bc0cbb5659-utilities" (OuterVolumeSpecName: "utilities") pod "0ba62402-c750-4507-afb1-a4bc0cbb5659" (UID: "0ba62402-c750-4507-afb1-a4bc0cbb5659"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.079432 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ba62402-c750-4507-afb1-a4bc0cbb5659-kube-api-access-p8rsd" (OuterVolumeSpecName: "kube-api-access-p8rsd") pod "0ba62402-c750-4507-afb1-a4bc0cbb5659" (UID: "0ba62402-c750-4507-afb1-a4bc0cbb5659"). InnerVolumeSpecName "kube-api-access-p8rsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.110329 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ba62402-c750-4507-afb1-a4bc0cbb5659-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ba62402-c750-4507-afb1-a4bc0cbb5659" (UID: "0ba62402-c750-4507-afb1-a4bc0cbb5659"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.175686 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ba62402-c750-4507-afb1-a4bc0cbb5659-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.175720 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8rsd\" (UniqueName: \"kubernetes.io/projected/0ba62402-c750-4507-afb1-a4bc0cbb5659-kube-api-access-p8rsd\") on node \"crc\" DevicePath \"\"" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.175733 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ba62402-c750-4507-afb1-a4bc0cbb5659-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.424485 4881 generic.go:334] "Generic (PLEG): container finished" podID="0ba62402-c750-4507-afb1-a4bc0cbb5659" containerID="cfe3f359c9da9107984b40eb9353cd42eb19a785d40d76f43671efed2ca5d72d" exitCode=0 Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.424587 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.424593 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm5xm" event={"ID":"0ba62402-c750-4507-afb1-a4bc0cbb5659","Type":"ContainerDied","Data":"cfe3f359c9da9107984b40eb9353cd42eb19a785d40d76f43671efed2ca5d72d"} Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.424657 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm5xm" event={"ID":"0ba62402-c750-4507-afb1-a4bc0cbb5659","Type":"ContainerDied","Data":"3bf40f3659d32a5324d6f5ded95c6c1fa84643efcf43ead247e37f6b81603f5f"} Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.424682 4881 scope.go:117] "RemoveContainer" containerID="cfe3f359c9da9107984b40eb9353cd42eb19a785d40d76f43671efed2ca5d72d" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.450282 4881 scope.go:117] "RemoveContainer" containerID="ffc511f2b91abb8aa0c1b1c2de1899a0ace55f62e734b5204a731da6814cb8ce" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.479716 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rm5xm"] Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.482333 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rm5xm"] Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.501691 4881 scope.go:117] "RemoveContainer" containerID="37f4380c9f0bded2a8d74e846aa0359ad6632d6691c8866fa1b38a0840862cee" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.547200 4881 scope.go:117] "RemoveContainer" containerID="cfe3f359c9da9107984b40eb9353cd42eb19a785d40d76f43671efed2ca5d72d" Jan 21 12:24:32 crc kubenswrapper[4881]: E0121 12:24:32.549958 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfe3f359c9da9107984b40eb9353cd42eb19a785d40d76f43671efed2ca5d72d\": container with ID starting with cfe3f359c9da9107984b40eb9353cd42eb19a785d40d76f43671efed2ca5d72d not found: ID does not exist" containerID="cfe3f359c9da9107984b40eb9353cd42eb19a785d40d76f43671efed2ca5d72d" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.550046 4881 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfe3f359c9da9107984b40eb9353cd42eb19a785d40d76f43671efed2ca5d72d"} err="failed to get container status \"cfe3f359c9da9107984b40eb9353cd42eb19a785d40d76f43671efed2ca5d72d\": rpc error: code = NotFound desc = could not find container \"cfe3f359c9da9107984b40eb9353cd42eb19a785d40d76f43671efed2ca5d72d\": container with ID starting with cfe3f359c9da9107984b40eb9353cd42eb19a785d40d76f43671efed2ca5d72d not found: ID does not exist" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.550087 4881 scope.go:117] "RemoveContainer" containerID="ffc511f2b91abb8aa0c1b1c2de1899a0ace55f62e734b5204a731da6814cb8ce" Jan 21 12:24:32 crc kubenswrapper[4881]: E0121 12:24:32.550503 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffc511f2b91abb8aa0c1b1c2de1899a0ace55f62e734b5204a731da6814cb8ce\": container with ID starting with ffc511f2b91abb8aa0c1b1c2de1899a0ace55f62e734b5204a731da6814cb8ce not found: ID does not exist" containerID="ffc511f2b91abb8aa0c1b1c2de1899a0ace55f62e734b5204a731da6814cb8ce" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.550536 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffc511f2b91abb8aa0c1b1c2de1899a0ace55f62e734b5204a731da6814cb8ce"} err="failed to get container status \"ffc511f2b91abb8aa0c1b1c2de1899a0ace55f62e734b5204a731da6814cb8ce\": rpc error: code = NotFound desc = could not find container \"ffc511f2b91abb8aa0c1b1c2de1899a0ace55f62e734b5204a731da6814cb8ce\": container with ID starting with ffc511f2b91abb8aa0c1b1c2de1899a0ace55f62e734b5204a731da6814cb8ce not found: ID does not exist" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.550559 4881 scope.go:117] "RemoveContainer" containerID="37f4380c9f0bded2a8d74e846aa0359ad6632d6691c8866fa1b38a0840862cee" Jan 21 12:24:32 crc kubenswrapper[4881]: E0121 12:24:32.551020 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37f4380c9f0bded2a8d74e846aa0359ad6632d6691c8866fa1b38a0840862cee\": container with ID starting with 37f4380c9f0bded2a8d74e846aa0359ad6632d6691c8866fa1b38a0840862cee not found: ID does not exist" containerID="37f4380c9f0bded2a8d74e846aa0359ad6632d6691c8866fa1b38a0840862cee" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.551050 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37f4380c9f0bded2a8d74e846aa0359ad6632d6691c8866fa1b38a0840862cee"} err="failed to get container status \"37f4380c9f0bded2a8d74e846aa0359ad6632d6691c8866fa1b38a0840862cee\": rpc error: code = NotFound desc = could not find container \"37f4380c9f0bded2a8d74e846aa0359ad6632d6691c8866fa1b38a0840862cee\": container with ID starting with 37f4380c9f0bded2a8d74e846aa0359ad6632d6691c8866fa1b38a0840862cee not found: ID does not exist" Jan 21 12:24:33 crc kubenswrapper[4881]: I0121 12:24:33.327738 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ba62402-c750-4507-afb1-a4bc0cbb5659" path="/var/lib/kubelet/pods/0ba62402-c750-4507-afb1-a4bc0cbb5659/volumes" Jan 21 12:24:33 crc kubenswrapper[4881]: I0121 12:24:33.615933 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7696g"] Jan 21 12:24:34 crc kubenswrapper[4881]: I0121 12:24:34.506610 4881 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/redhat-operators-7696g" podUID="19920016-1549-4841-b51a-4571079dfd12" containerName="registry-server" containerID="cri-o://ec4a6d88164cb676deacc2160eb5db150bf8626f37c2841b485ba6ee59a8c9fa" gracePeriod=2 Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.004277 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.110400 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x278n\" (UniqueName: \"kubernetes.io/projected/19920016-1549-4841-b51a-4571079dfd12-kube-api-access-x278n\") pod \"19920016-1549-4841-b51a-4571079dfd12\" (UID: \"19920016-1549-4841-b51a-4571079dfd12\") " Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.110461 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19920016-1549-4841-b51a-4571079dfd12-catalog-content\") pod \"19920016-1549-4841-b51a-4571079dfd12\" (UID: \"19920016-1549-4841-b51a-4571079dfd12\") " Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.110673 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19920016-1549-4841-b51a-4571079dfd12-utilities\") pod \"19920016-1549-4841-b51a-4571079dfd12\" (UID: \"19920016-1549-4841-b51a-4571079dfd12\") " Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.111698 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19920016-1549-4841-b51a-4571079dfd12-utilities" (OuterVolumeSpecName: "utilities") pod "19920016-1549-4841-b51a-4571079dfd12" (UID: "19920016-1549-4841-b51a-4571079dfd12"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.118056 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19920016-1549-4841-b51a-4571079dfd12-kube-api-access-x278n" (OuterVolumeSpecName: "kube-api-access-x278n") pod "19920016-1549-4841-b51a-4571079dfd12" (UID: "19920016-1549-4841-b51a-4571079dfd12"). InnerVolumeSpecName "kube-api-access-x278n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.214052 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x278n\" (UniqueName: \"kubernetes.io/projected/19920016-1549-4841-b51a-4571079dfd12-kube-api-access-x278n\") on node \"crc\" DevicePath \"\"" Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.214101 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19920016-1549-4841-b51a-4571079dfd12-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.242845 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19920016-1549-4841-b51a-4571079dfd12-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19920016-1549-4841-b51a-4571079dfd12" (UID: "19920016-1549-4841-b51a-4571079dfd12"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.316758 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19920016-1549-4841-b51a-4571079dfd12-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.522953 4881 generic.go:334] "Generic (PLEG): container finished" podID="19920016-1549-4841-b51a-4571079dfd12" containerID="ec4a6d88164cb676deacc2160eb5db150bf8626f37c2841b485ba6ee59a8c9fa" exitCode=0 Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.523030 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7696g" event={"ID":"19920016-1549-4841-b51a-4571079dfd12","Type":"ContainerDied","Data":"ec4a6d88164cb676deacc2160eb5db150bf8626f37c2841b485ba6ee59a8c9fa"} Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.523215 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7696g" event={"ID":"19920016-1549-4841-b51a-4571079dfd12","Type":"ContainerDied","Data":"55d842e7bd717974cdf52ec1477da5ecf0227134a5bbda5a2e4ccd1cb867fd3b"} Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.523099 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.523244 4881 scope.go:117] "RemoveContainer" containerID="ec4a6d88164cb676deacc2160eb5db150bf8626f37c2841b485ba6ee59a8c9fa" Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.550902 4881 scope.go:117] "RemoveContainer" containerID="654b76266297416bb42449a45940dda64d9c9dce72b47a3a5bad8b637cf06338" Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.557016 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7696g"] Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.566744 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7696g"] Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.577189 4881 scope.go:117] "RemoveContainer" containerID="e96a1cae04cf77c68174148d44645ae46ea9275c3a26364221425c3a279d1888" Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.620023 4881 scope.go:117] "RemoveContainer" containerID="ec4a6d88164cb676deacc2160eb5db150bf8626f37c2841b485ba6ee59a8c9fa" Jan 21 12:24:35 crc kubenswrapper[4881]: E0121 12:24:35.620561 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec4a6d88164cb676deacc2160eb5db150bf8626f37c2841b485ba6ee59a8c9fa\": container with ID starting with ec4a6d88164cb676deacc2160eb5db150bf8626f37c2841b485ba6ee59a8c9fa not found: ID does not exist" containerID="ec4a6d88164cb676deacc2160eb5db150bf8626f37c2841b485ba6ee59a8c9fa" Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.620607 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec4a6d88164cb676deacc2160eb5db150bf8626f37c2841b485ba6ee59a8c9fa"} err="failed to get container status \"ec4a6d88164cb676deacc2160eb5db150bf8626f37c2841b485ba6ee59a8c9fa\": rpc error: code = NotFound desc = could not find container \"ec4a6d88164cb676deacc2160eb5db150bf8626f37c2841b485ba6ee59a8c9fa\": container with ID starting with ec4a6d88164cb676deacc2160eb5db150bf8626f37c2841b485ba6ee59a8c9fa not found: ID does not exist" Jan 21 12:24:35 crc 
kubenswrapper[4881]: I0121 12:24:35.620638 4881 scope.go:117] "RemoveContainer" containerID="654b76266297416bb42449a45940dda64d9c9dce72b47a3a5bad8b637cf06338" Jan 21 12:24:35 crc kubenswrapper[4881]: E0121 12:24:35.621019 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"654b76266297416bb42449a45940dda64d9c9dce72b47a3a5bad8b637cf06338\": container with ID starting with 654b76266297416bb42449a45940dda64d9c9dce72b47a3a5bad8b637cf06338 not found: ID does not exist" containerID="654b76266297416bb42449a45940dda64d9c9dce72b47a3a5bad8b637cf06338" Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.621087 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"654b76266297416bb42449a45940dda64d9c9dce72b47a3a5bad8b637cf06338"} err="failed to get container status \"654b76266297416bb42449a45940dda64d9c9dce72b47a3a5bad8b637cf06338\": rpc error: code = NotFound desc = could not find container \"654b76266297416bb42449a45940dda64d9c9dce72b47a3a5bad8b637cf06338\": container with ID starting with 654b76266297416bb42449a45940dda64d9c9dce72b47a3a5bad8b637cf06338 not found: ID does not exist" Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.621121 4881 scope.go:117] "RemoveContainer" containerID="e96a1cae04cf77c68174148d44645ae46ea9275c3a26364221425c3a279d1888" Jan 21 12:24:35 crc kubenswrapper[4881]: E0121 12:24:35.621455 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e96a1cae04cf77c68174148d44645ae46ea9275c3a26364221425c3a279d1888\": container with ID starting with e96a1cae04cf77c68174148d44645ae46ea9275c3a26364221425c3a279d1888 not found: ID does not exist" containerID="e96a1cae04cf77c68174148d44645ae46ea9275c3a26364221425c3a279d1888" Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.621513 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e96a1cae04cf77c68174148d44645ae46ea9275c3a26364221425c3a279d1888"} err="failed to get container status \"e96a1cae04cf77c68174148d44645ae46ea9275c3a26364221425c3a279d1888\": rpc error: code = NotFound desc = could not find container \"e96a1cae04cf77c68174148d44645ae46ea9275c3a26364221425c3a279d1888\": container with ID starting with e96a1cae04cf77c68174148d44645ae46ea9275c3a26364221425c3a279d1888 not found: ID does not exist" Jan 21 12:24:37 crc kubenswrapper[4881]: I0121 12:24:37.324179 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19920016-1549-4841-b51a-4571079dfd12" path="/var/lib/kubelet/pods/19920016-1549-4841-b51a-4571079dfd12/volumes" Jan 21 12:24:44 crc kubenswrapper[4881]: I0121 12:24:44.312100 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0" Jan 21 12:24:44 crc kubenswrapper[4881]: E0121 12:24:44.313287 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:24:55 crc kubenswrapper[4881]: I0121 12:24:55.310946 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0" 
Jan 21 12:24:55 crc kubenswrapper[4881]: E0121 12:24:55.311843 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:25:08 crc kubenswrapper[4881]: I0121 12:25:08.311297 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:25:08 crc kubenswrapper[4881]: E0121 12:25:08.312300 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:25:23 crc kubenswrapper[4881]: I0121 12:25:23.323365 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:25:23 crc kubenswrapper[4881]: E0121 12:25:23.325522 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:25:38 crc kubenswrapper[4881]: I0121 12:25:38.310604 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:25:38 crc kubenswrapper[4881]: E0121 12:25:38.311683 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:25:49 crc kubenswrapper[4881]: I0121 12:25:49.311563 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:25:49 crc kubenswrapper[4881]: E0121 12:25:49.314054 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:26:01 crc kubenswrapper[4881]: I0121 12:26:01.311820 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:26:01 crc kubenswrapper[4881]: E0121 12:26:01.312683 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:26:13 crc kubenswrapper[4881]: I0121 12:26:13.320763 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:26:13 crc kubenswrapper[4881]: E0121 12:26:13.322768 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:26:27 crc kubenswrapper[4881]: I0121 12:26:27.311436 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:26:27 crc kubenswrapper[4881]: E0121 12:26:27.312383 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:26:39 crc kubenswrapper[4881]: I0121 12:26:39.311100 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:26:39 crc kubenswrapper[4881]: E0121 12:26:39.328088 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:26:52 crc kubenswrapper[4881]: I0121 12:26:52.312324 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:26:52 crc kubenswrapper[4881]: E0121 12:26:52.313243 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:27:03 crc kubenswrapper[4881]: I0121 12:27:03.335603 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:27:03 crc kubenswrapper[4881]: E0121 12:27:03.336833 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:27:14 crc kubenswrapper[4881]: I0121 12:27:14.311513 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:27:14 crc kubenswrapper[4881]: E0121 12:27:14.312385 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:27:25 crc kubenswrapper[4881]: I0121 12:27:25.311125 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:27:25 crc kubenswrapper[4881]: E0121 12:27:25.311949 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:27:39 crc kubenswrapper[4881]: I0121 12:27:39.311057 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:27:39 crc kubenswrapper[4881]: E0121 12:27:39.311923 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:27:52 crc kubenswrapper[4881]: I0121 12:27:52.311360 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:27:52 crc kubenswrapper[4881]: E0121 12:27:52.312125 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:28:04 crc kubenswrapper[4881]: I0121 12:28:04.313020 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:28:04 crc kubenswrapper[4881]: E0121 12:28:04.314150 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:28:16 crc kubenswrapper[4881]: I0121 12:28:16.311629 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:28:16 crc kubenswrapper[4881]: E0121 12:28:16.313476 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:28:29 crc kubenswrapper[4881]: I0121 12:28:29.312067 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:28:29 crc kubenswrapper[4881]: E0121 12:28:29.314200 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:28:40 crc kubenswrapper[4881]: I0121 12:28:40.312018 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:28:40 crc kubenswrapper[4881]: E0121 12:28:40.313007 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:28:53 crc kubenswrapper[4881]: I0121 12:28:53.321705 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:28:53 crc kubenswrapper[4881]: E0121 12:28:53.323131 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:29:06 crc kubenswrapper[4881]: I0121 12:29:06.310928 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:29:06 crc kubenswrapper[4881]: E0121 12:29:06.311719 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:29:20 crc kubenswrapper[4881]: I0121 12:29:20.311096 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:29:20 crc kubenswrapper[4881]: E0121 12:29:20.312985 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:29:34 crc kubenswrapper[4881]: I0121 12:29:34.312571 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:29:34 crc kubenswrapper[4881]: I0121 12:29:34.783544 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"5ce4f2646890b2b0b35075452c84c9194c468c1e2e3c942d6c0c4679e67f5d4f"}
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.260681 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"]
Jan 21 12:30:00 crc kubenswrapper[4881]: E0121 12:30:00.261764 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ba62402-c750-4507-afb1-a4bc0cbb5659" containerName="extract-content"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.261802 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ba62402-c750-4507-afb1-a4bc0cbb5659" containerName="extract-content"
Jan 21 12:30:00 crc kubenswrapper[4881]: E0121 12:30:00.261832 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19920016-1549-4841-b51a-4571079dfd12" containerName="extract-content"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.261840 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="19920016-1549-4841-b51a-4571079dfd12" containerName="extract-content"
Jan 21 12:30:00 crc kubenswrapper[4881]: E0121 12:30:00.261870 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19920016-1549-4841-b51a-4571079dfd12" containerName="extract-utilities"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.261880 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="19920016-1549-4841-b51a-4571079dfd12" containerName="extract-utilities"
Jan 21 12:30:00 crc kubenswrapper[4881]: E0121 12:30:00.261895 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ba62402-c750-4507-afb1-a4bc0cbb5659" containerName="registry-server"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.261903 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ba62402-c750-4507-afb1-a4bc0cbb5659" containerName="registry-server"
Jan 21 12:30:00 crc kubenswrapper[4881]: E0121 12:30:00.261929 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19920016-1549-4841-b51a-4571079dfd12" containerName="registry-server"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.261936 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="19920016-1549-4841-b51a-4571079dfd12" containerName="registry-server"
Jan 21 12:30:00 crc kubenswrapper[4881]: E0121 12:30:00.261954 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ba62402-c750-4507-afb1-a4bc0cbb5659" containerName="extract-utilities"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.261964 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ba62402-c750-4507-afb1-a4bc0cbb5659" containerName="extract-utilities"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.262220 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ba62402-c750-4507-afb1-a4bc0cbb5659" containerName="registry-server"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.262239 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="19920016-1549-4841-b51a-4571079dfd12" containerName="registry-server"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.263217 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.263707 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"]
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.266210 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.268026 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.445160 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5368d7c4-a23a-46aa-8dea-1fde26f5df53-secret-volume\") pod \"collect-profiles-29483310-ntw6g\" (UID: \"5368d7c4-a23a-46aa-8dea-1fde26f5df53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.446819 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wfd4\" (UniqueName: \"kubernetes.io/projected/5368d7c4-a23a-46aa-8dea-1fde26f5df53-kube-api-access-7wfd4\") pod \"collect-profiles-29483310-ntw6g\" (UID: \"5368d7c4-a23a-46aa-8dea-1fde26f5df53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.446980 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5368d7c4-a23a-46aa-8dea-1fde26f5df53-config-volume\") pod \"collect-profiles-29483310-ntw6g\" (UID: \"5368d7c4-a23a-46aa-8dea-1fde26f5df53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.548940 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5368d7c4-a23a-46aa-8dea-1fde26f5df53-secret-volume\") pod \"collect-profiles-29483310-ntw6g\" (UID: \"5368d7c4-a23a-46aa-8dea-1fde26f5df53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.549416 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wfd4\" (UniqueName: \"kubernetes.io/projected/5368d7c4-a23a-46aa-8dea-1fde26f5df53-kube-api-access-7wfd4\") pod \"collect-profiles-29483310-ntw6g\" (UID: \"5368d7c4-a23a-46aa-8dea-1fde26f5df53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.549524 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5368d7c4-a23a-46aa-8dea-1fde26f5df53-config-volume\") pod \"collect-profiles-29483310-ntw6g\" (UID: \"5368d7c4-a23a-46aa-8dea-1fde26f5df53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.550836 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5368d7c4-a23a-46aa-8dea-1fde26f5df53-config-volume\") pod \"collect-profiles-29483310-ntw6g\" (UID: \"5368d7c4-a23a-46aa-8dea-1fde26f5df53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.557905 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5368d7c4-a23a-46aa-8dea-1fde26f5df53-secret-volume\") pod \"collect-profiles-29483310-ntw6g\" (UID: \"5368d7c4-a23a-46aa-8dea-1fde26f5df53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.572085 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wfd4\" (UniqueName: \"kubernetes.io/projected/5368d7c4-a23a-46aa-8dea-1fde26f5df53-kube-api-access-7wfd4\") pod \"collect-profiles-29483310-ntw6g\" (UID: \"5368d7c4-a23a-46aa-8dea-1fde26f5df53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.585156 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"
Jan 21 12:30:01 crc kubenswrapper[4881]: I0121 12:30:01.148684 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"]
Jan 21 12:30:01 crc kubenswrapper[4881]: I0121 12:30:01.201389 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g" event={"ID":"5368d7c4-a23a-46aa-8dea-1fde26f5df53","Type":"ContainerStarted","Data":"4164bf5d4bc11259b2f83b181016e2372dbf6f746c00a5e5d99d2c9e0c84bec1"}
Jan 21 12:30:02 crc kubenswrapper[4881]: I0121 12:30:02.213387 4881 generic.go:334] "Generic (PLEG): container finished" podID="5368d7c4-a23a-46aa-8dea-1fde26f5df53" containerID="b60782b6ad5aeb71531d28ab48543fd988c6726bf0975c069d2238cd6237f3ab" exitCode=0
Jan 21 12:30:02 crc kubenswrapper[4881]: I0121 12:30:02.213559 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g" event={"ID":"5368d7c4-a23a-46aa-8dea-1fde26f5df53","Type":"ContainerDied","Data":"b60782b6ad5aeb71531d28ab48543fd988c6726bf0975c069d2238cd6237f3ab"}
Jan 21 12:30:03 crc kubenswrapper[4881]: I0121 12:30:03.690856 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"
Jan 21 12:30:03 crc kubenswrapper[4881]: I0121 12:30:03.820869 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wfd4\" (UniqueName: \"kubernetes.io/projected/5368d7c4-a23a-46aa-8dea-1fde26f5df53-kube-api-access-7wfd4\") pod \"5368d7c4-a23a-46aa-8dea-1fde26f5df53\" (UID: \"5368d7c4-a23a-46aa-8dea-1fde26f5df53\") "
Jan 21 12:30:03 crc kubenswrapper[4881]: I0121 12:30:03.821069 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5368d7c4-a23a-46aa-8dea-1fde26f5df53-secret-volume\") pod \"5368d7c4-a23a-46aa-8dea-1fde26f5df53\" (UID: \"5368d7c4-a23a-46aa-8dea-1fde26f5df53\") "
Jan 21 12:30:03 crc kubenswrapper[4881]: I0121 12:30:03.821149 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5368d7c4-a23a-46aa-8dea-1fde26f5df53-config-volume\") pod \"5368d7c4-a23a-46aa-8dea-1fde26f5df53\" (UID: \"5368d7c4-a23a-46aa-8dea-1fde26f5df53\") "
Jan 21 12:30:03 crc kubenswrapper[4881]: I0121 12:30:03.822366 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5368d7c4-a23a-46aa-8dea-1fde26f5df53-config-volume" (OuterVolumeSpecName: "config-volume") pod "5368d7c4-a23a-46aa-8dea-1fde26f5df53" (UID: "5368d7c4-a23a-46aa-8dea-1fde26f5df53"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 12:30:03 crc kubenswrapper[4881]: I0121 12:30:03.828893 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5368d7c4-a23a-46aa-8dea-1fde26f5df53-kube-api-access-7wfd4" (OuterVolumeSpecName: "kube-api-access-7wfd4") pod "5368d7c4-a23a-46aa-8dea-1fde26f5df53" (UID: "5368d7c4-a23a-46aa-8dea-1fde26f5df53"). InnerVolumeSpecName "kube-api-access-7wfd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 12:30:03 crc kubenswrapper[4881]: I0121 12:30:03.831133 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5368d7c4-a23a-46aa-8dea-1fde26f5df53-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5368d7c4-a23a-46aa-8dea-1fde26f5df53" (UID: "5368d7c4-a23a-46aa-8dea-1fde26f5df53"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 12:30:03 crc kubenswrapper[4881]: I0121 12:30:03.924150 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5368d7c4-a23a-46aa-8dea-1fde26f5df53-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 12:30:03 crc kubenswrapper[4881]: I0121 12:30:03.924214 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wfd4\" (UniqueName: \"kubernetes.io/projected/5368d7c4-a23a-46aa-8dea-1fde26f5df53-kube-api-access-7wfd4\") on node \"crc\" DevicePath \"\""
Jan 21 12:30:03 crc kubenswrapper[4881]: I0121 12:30:03.924232 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5368d7c4-a23a-46aa-8dea-1fde26f5df53-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 12:30:04 crc kubenswrapper[4881]: I0121 12:30:04.239977 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g" event={"ID":"5368d7c4-a23a-46aa-8dea-1fde26f5df53","Type":"ContainerDied","Data":"4164bf5d4bc11259b2f83b181016e2372dbf6f746c00a5e5d99d2c9e0c84bec1"}
Jan 21 12:30:04 crc kubenswrapper[4881]: I0121 12:30:04.240026 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4164bf5d4bc11259b2f83b181016e2372dbf6f746c00a5e5d99d2c9e0c84bec1"
Jan 21 12:30:04 crc kubenswrapper[4881]: I0121 12:30:04.240102 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"
Jan 21 12:30:04 crc kubenswrapper[4881]: I0121 12:30:04.779771 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk"]
Jan 21 12:30:04 crc kubenswrapper[4881]: I0121 12:30:04.796056 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk"]
Jan 21 12:30:05 crc kubenswrapper[4881]: I0121 12:30:05.325955 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49387e54-5709-46bd-9f76-cd79369d9abe" path="/var/lib/kubelet/pods/49387e54-5709-46bd-9f76-cd79369d9abe/volumes"
Jan 21 12:30:33 crc kubenswrapper[4881]: I0121 12:30:33.632697 4881 scope.go:117] "RemoveContainer" containerID="03feba2a29229654c706a38fc1bff6c4df03df1eca6406a125ce3ee72913286b"
Jan 21 12:31:25 crc kubenswrapper[4881]: I0121 12:31:25.495017 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r" podUID="a194c95e-cbcb-4d7e-a631-d4a14989e985" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.55:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 12:31:25 crc kubenswrapper[4881]: I0121 12:31:25.495037 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r" podUID="a194c95e-cbcb-4d7e-a631-d4a14989e985" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.55:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 12:31:59 crc kubenswrapper[4881]: I0121 12:31:59.851086 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 12:31:59 crc kubenswrapper[4881]: I0121 12:31:59.851591 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 12:32:29 crc kubenswrapper[4881]: I0121 12:32:29.851462 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 12:32:29 crc kubenswrapper[4881]: I0121 12:32:29.852418 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 12:32:59 crc kubenswrapper[4881]: I0121 12:32:59.851254 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 12:32:59 crc kubenswrapper[4881]: I0121 12:32:59.851873 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 12:32:59 crc kubenswrapper[4881]: I0121 12:32:59.851970 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr"
Jan 21 12:32:59 crc kubenswrapper[4881]: I0121 12:32:59.852961 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5ce4f2646890b2b0b35075452c84c9194c468c1e2e3c942d6c0c4679e67f5d4f"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 12:32:59 crc kubenswrapper[4881]: I0121 12:32:59.853160 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://5ce4f2646890b2b0b35075452c84c9194c468c1e2e3c942d6c0c4679e67f5d4f" gracePeriod=600
Jan 21 12:33:00 crc kubenswrapper[4881]: I0121 12:33:00.951816 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="5ce4f2646890b2b0b35075452c84c9194c468c1e2e3c942d6c0c4679e67f5d4f" exitCode=0
Jan 21 12:33:00 crc kubenswrapper[4881]: I0121 12:33:00.951938 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"5ce4f2646890b2b0b35075452c84c9194c468c1e2e3c942d6c0c4679e67f5d4f"}
Jan 21 12:33:00 crc kubenswrapper[4881]: I0121 12:33:00.952337 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3"}
Jan 21 12:33:00 crc kubenswrapper[4881]: I0121 12:33:00.952409 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:34:19 crc kubenswrapper[4881]: I0121 12:34:19.307290 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vspzs"]
Jan 21 12:34:19 crc kubenswrapper[4881]: E0121 12:34:19.309783 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5368d7c4-a23a-46aa-8dea-1fde26f5df53" containerName="collect-profiles"
Jan 21 12:34:19 crc kubenswrapper[4881]: I0121 12:34:19.309909 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5368d7c4-a23a-46aa-8dea-1fde26f5df53" containerName="collect-profiles"
Jan 21 12:34:19 crc kubenswrapper[4881]: I0121 12:34:19.310266 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="5368d7c4-a23a-46aa-8dea-1fde26f5df53" containerName="collect-profiles"
Jan 21 12:34:19 crc kubenswrapper[4881]: I0121 12:34:19.312632 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:19 crc kubenswrapper[4881]: I0121 12:34:19.354451 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7922fd-c90d-44be-924c-961055910625-catalog-content\") pod \"redhat-marketplace-vspzs\" (UID: \"6e7922fd-c90d-44be-924c-961055910625\") " pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:19 crc kubenswrapper[4881]: I0121 12:34:19.354856 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drhbl\" (UniqueName: \"kubernetes.io/projected/6e7922fd-c90d-44be-924c-961055910625-kube-api-access-drhbl\") pod \"redhat-marketplace-vspzs\" (UID: \"6e7922fd-c90d-44be-924c-961055910625\") " pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:19 crc kubenswrapper[4881]: I0121 12:34:19.355194 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7922fd-c90d-44be-924c-961055910625-utilities\") pod \"redhat-marketplace-vspzs\" (UID: \"6e7922fd-c90d-44be-924c-961055910625\") " pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:19 crc kubenswrapper[4881]: I0121 12:34:19.386148 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vspzs"]
Jan 21 12:34:19 crc kubenswrapper[4881]: I0121 12:34:19.456991 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7922fd-c90d-44be-924c-961055910625-catalog-content\") pod \"redhat-marketplace-vspzs\" (UID: \"6e7922fd-c90d-44be-924c-961055910625\") " pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:19 crc kubenswrapper[4881]: I0121 12:34:19.457097 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drhbl\" (UniqueName: \"kubernetes.io/projected/6e7922fd-c90d-44be-924c-961055910625-kube-api-access-drhbl\") pod \"redhat-marketplace-vspzs\" (UID: \"6e7922fd-c90d-44be-924c-961055910625\") " pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:19 crc kubenswrapper[4881]: I0121 12:34:19.457177 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7922fd-c90d-44be-924c-961055910625-utilities\") pod \"redhat-marketplace-vspzs\" (UID: \"6e7922fd-c90d-44be-924c-961055910625\") " pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:19 crc kubenswrapper[4881]: I0121 12:34:19.458161 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7922fd-c90d-44be-924c-961055910625-catalog-content\") pod \"redhat-marketplace-vspzs\" (UID: \"6e7922fd-c90d-44be-924c-961055910625\") " pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:19 crc kubenswrapper[4881]: I0121 12:34:19.458820 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7922fd-c90d-44be-924c-961055910625-utilities\") pod \"redhat-marketplace-vspzs\" (UID: \"6e7922fd-c90d-44be-924c-961055910625\") " pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:19 crc kubenswrapper[4881]: I0121 12:34:19.581628 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drhbl\" (UniqueName: \"kubernetes.io/projected/6e7922fd-c90d-44be-924c-961055910625-kube-api-access-drhbl\") pod \"redhat-marketplace-vspzs\" (UID: \"6e7922fd-c90d-44be-924c-961055910625\") " pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:19 crc kubenswrapper[4881]: I0121 12:34:19.653933 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:20 crc kubenswrapper[4881]: I0121 12:34:20.222145 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vspzs"]
Jan 21 12:34:20 crc kubenswrapper[4881]: I0121 12:34:20.850061 4881 generic.go:334] "Generic (PLEG): container finished" podID="6e7922fd-c90d-44be-924c-961055910625" containerID="aa00cccb6838c50f5a442f098a65d1d05eece62c87a5fbef804e514a756f7f64" exitCode=0
Jan 21 12:34:20 crc kubenswrapper[4881]: I0121 12:34:20.850163 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vspzs" event={"ID":"6e7922fd-c90d-44be-924c-961055910625","Type":"ContainerDied","Data":"aa00cccb6838c50f5a442f098a65d1d05eece62c87a5fbef804e514a756f7f64"}
Jan 21 12:34:20 crc kubenswrapper[4881]: I0121 12:34:20.850327 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vspzs" event={"ID":"6e7922fd-c90d-44be-924c-961055910625","Type":"ContainerStarted","Data":"f64906d1b9120521632645eb3d6bcfdbd7f3e7bb5868aa2f3886549c679f4f5f"}
Jan 21 12:34:20 crc kubenswrapper[4881]: I0121 12:34:20.854359 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 12:34:22 crc kubenswrapper[4881]: I0121 12:34:22.873588 4881 generic.go:334] "Generic (PLEG): container finished" podID="6e7922fd-c90d-44be-924c-961055910625" containerID="bb827d0c521f710737507c54a40bb3151f05c5326264b9c349f57dd2e400b8ee" exitCode=0
Jan 21 12:34:22 crc kubenswrapper[4881]: I0121 12:34:22.873666 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vspzs" event={"ID":"6e7922fd-c90d-44be-924c-961055910625","Type":"ContainerDied","Data":"bb827d0c521f710737507c54a40bb3151f05c5326264b9c349f57dd2e400b8ee"}
Jan 21 12:34:23 crc kubenswrapper[4881]: I0121 12:34:23.887047 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vspzs" event={"ID":"6e7922fd-c90d-44be-924c-961055910625","Type":"ContainerStarted","Data":"aac79371442850cad994c9e4cb25b92c5cb4ef8f3e9e7cbd47ed7dc0f33169a3"}
Jan 21 12:34:23 crc kubenswrapper[4881]: I0121 12:34:23.916766 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vspzs" podStartSLOduration=2.504747499 podStartE2EDuration="4.916697727s" podCreationTimestamp="2026-01-21 12:34:19 +0000 UTC" firstStartedPulling="2026-01-21 12:34:20.853531074 +0000 UTC m=+5848.113487583" lastFinishedPulling="2026-01-21 12:34:23.265481342 +0000 UTC m=+5850.525437811" observedRunningTime="2026-01-21 12:34:23.905613935 +0000 UTC m=+5851.165570414" watchObservedRunningTime="2026-01-21 12:34:23.916697727 +0000 UTC m=+5851.176654196"
Jan 21 12:34:29 crc kubenswrapper[4881]: I0121 12:34:29.654749 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:29 crc kubenswrapper[4881]: I0121 12:34:29.655339 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:29 crc kubenswrapper[4881]: I0121 12:34:29.706861 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:29 crc kubenswrapper[4881]: I0121 12:34:29.993712 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:30 crc kubenswrapper[4881]: I0121 12:34:30.049074 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vspzs"]
Jan 21 12:34:31 crc kubenswrapper[4881]: I0121 12:34:31.963242 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vspzs" podUID="6e7922fd-c90d-44be-924c-961055910625" containerName="registry-server" containerID="cri-o://aac79371442850cad994c9e4cb25b92c5cb4ef8f3e9e7cbd47ed7dc0f33169a3" gracePeriod=2
Jan 21 12:34:33 crc kubenswrapper[4881]: I0121 12:34:33.034931 4881 generic.go:334] "Generic (PLEG): container finished" podID="6e7922fd-c90d-44be-924c-961055910625" containerID="aac79371442850cad994c9e4cb25b92c5cb4ef8f3e9e7cbd47ed7dc0f33169a3" exitCode=0
Jan 21 12:34:33 crc kubenswrapper[4881]: I0121 12:34:33.035038 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vspzs" event={"ID":"6e7922fd-c90d-44be-924c-961055910625","Type":"ContainerDied","Data":"aac79371442850cad994c9e4cb25b92c5cb4ef8f3e9e7cbd47ed7dc0f33169a3"}
Jan 21 12:34:33 crc kubenswrapper[4881]: I0121 12:34:33.286387 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:33 crc kubenswrapper[4881]: I0121 12:34:33.378726 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7922fd-c90d-44be-924c-961055910625-catalog-content\") pod \"6e7922fd-c90d-44be-924c-961055910625\" (UID: \"6e7922fd-c90d-44be-924c-961055910625\") "
Jan 21 12:34:33 crc kubenswrapper[4881]: I0121 12:34:33.378854 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drhbl\" (UniqueName: \"kubernetes.io/projected/6e7922fd-c90d-44be-924c-961055910625-kube-api-access-drhbl\") pod \"6e7922fd-c90d-44be-924c-961055910625\" (UID: \"6e7922fd-c90d-44be-924c-961055910625\") "
Jan 21 12:34:33 crc kubenswrapper[4881]: I0121 12:34:33.378934 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7922fd-c90d-44be-924c-961055910625-utilities\") pod \"6e7922fd-c90d-44be-924c-961055910625\" (UID: \"6e7922fd-c90d-44be-924c-961055910625\") "
Jan 21 12:34:33 crc kubenswrapper[4881]: I0121 12:34:33.380264 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e7922fd-c90d-44be-924c-961055910625-utilities" (OuterVolumeSpecName: "utilities") pod "6e7922fd-c90d-44be-924c-961055910625" (UID: "6e7922fd-c90d-44be-924c-961055910625"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 12:34:33 crc kubenswrapper[4881]: I0121 12:34:33.389734 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e7922fd-c90d-44be-924c-961055910625-kube-api-access-drhbl" (OuterVolumeSpecName: "kube-api-access-drhbl") pod "6e7922fd-c90d-44be-924c-961055910625" (UID: "6e7922fd-c90d-44be-924c-961055910625"). InnerVolumeSpecName "kube-api-access-drhbl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 12:34:33 crc kubenswrapper[4881]: I0121 12:34:33.412067 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e7922fd-c90d-44be-924c-961055910625-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e7922fd-c90d-44be-924c-961055910625" (UID: "6e7922fd-c90d-44be-924c-961055910625"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 12:34:33 crc kubenswrapper[4881]: I0121 12:34:33.481304 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7922fd-c90d-44be-924c-961055910625-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 12:34:33 crc kubenswrapper[4881]: I0121 12:34:33.481339 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drhbl\" (UniqueName: \"kubernetes.io/projected/6e7922fd-c90d-44be-924c-961055910625-kube-api-access-drhbl\") on node \"crc\" DevicePath \"\""
Jan 21 12:34:33 crc kubenswrapper[4881]: I0121 12:34:33.481348 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7922fd-c90d-44be-924c-961055910625-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 12:34:34 crc kubenswrapper[4881]: I0121 12:34:34.049111 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vspzs" event={"ID":"6e7922fd-c90d-44be-924c-961055910625","Type":"ContainerDied","Data":"f64906d1b9120521632645eb3d6bcfdbd7f3e7bb5868aa2f3886549c679f4f5f"}
Jan 21 12:34:34 crc kubenswrapper[4881]: I0121 12:34:34.049945 4881 scope.go:117] "RemoveContainer" containerID="aac79371442850cad994c9e4cb25b92c5cb4ef8f3e9e7cbd47ed7dc0f33169a3"
Jan 21 12:34:34 crc kubenswrapper[4881]: I0121 12:34:34.049235 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:34 crc kubenswrapper[4881]: I0121 12:34:34.082044 4881 scope.go:117] "RemoveContainer" containerID="bb827d0c521f710737507c54a40bb3151f05c5326264b9c349f57dd2e400b8ee"
Jan 21 12:34:34 crc kubenswrapper[4881]: I0121 12:34:34.108119 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vspzs"]
Jan 21 12:34:34 crc kubenswrapper[4881]: I0121 12:34:34.118524 4881 scope.go:117] "RemoveContainer" containerID="aa00cccb6838c50f5a442f098a65d1d05eece62c87a5fbef804e514a756f7f64"
Jan 21 12:34:34 crc kubenswrapper[4881]: I0121 12:34:34.119149 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vspzs"]
Jan 21 12:34:35 crc kubenswrapper[4881]: I0121 12:34:35.323615 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e7922fd-c90d-44be-924c-961055910625" path="/var/lib/kubelet/pods/6e7922fd-c90d-44be-924c-961055910625/volumes"
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.337761 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x8pdp"]
Jan 21 12:34:58 crc kubenswrapper[4881]: E0121 12:34:58.339961 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e7922fd-c90d-44be-924c-961055910625" containerName="extract-utilities"
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.340061 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e7922fd-c90d-44be-924c-961055910625" containerName="extract-utilities"
Jan 21 12:34:58 crc kubenswrapper[4881]: E0121 12:34:58.340158 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e7922fd-c90d-44be-924c-961055910625" containerName="extract-content"
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.340261 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e7922fd-c90d-44be-924c-961055910625" containerName="extract-content"
Jan 21 12:34:58 crc kubenswrapper[4881]: E0121 12:34:58.340353 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e7922fd-c90d-44be-924c-961055910625" containerName="registry-server"
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.340429 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e7922fd-c90d-44be-924c-961055910625" containerName="registry-server"
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.340809 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e7922fd-c90d-44be-924c-961055910625" containerName="registry-server"
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.343064 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x8pdp"
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.353486 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x8pdp"]
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.491200 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/134cc2ce-d598-4f3e-8e4d-0d52621fa050-utilities\") pod \"redhat-operators-x8pdp\" (UID: \"134cc2ce-d598-4f3e-8e4d-0d52621fa050\") " pod="openshift-marketplace/redhat-operators-x8pdp"
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.491592 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmf9k\" (UniqueName: \"kubernetes.io/projected/134cc2ce-d598-4f3e-8e4d-0d52621fa050-kube-api-access-cmf9k\") pod \"redhat-operators-x8pdp\" (UID: \"134cc2ce-d598-4f3e-8e4d-0d52621fa050\") " pod="openshift-marketplace/redhat-operators-x8pdp"
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.491944 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/134cc2ce-d598-4f3e-8e4d-0d52621fa050-catalog-content\") pod \"redhat-operators-x8pdp\" (UID: \"134cc2ce-d598-4f3e-8e4d-0d52621fa050\") " pod="openshift-marketplace/redhat-operators-x8pdp"
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.594600 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/134cc2ce-d598-4f3e-8e4d-0d52621fa050-utilities\") pod \"redhat-operators-x8pdp\" (UID: \"134cc2ce-d598-4f3e-8e4d-0d52621fa050\") " pod="openshift-marketplace/redhat-operators-x8pdp"
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.594671 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmf9k\" (UniqueName: \"kubernetes.io/projected/134cc2ce-d598-4f3e-8e4d-0d52621fa050-kube-api-access-cmf9k\") pod \"redhat-operators-x8pdp\" (UID: \"134cc2ce-d598-4f3e-8e4d-0d52621fa050\") " pod="openshift-marketplace/redhat-operators-x8pdp"
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.594773 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/134cc2ce-d598-4f3e-8e4d-0d52621fa050-catalog-content\") pod \"redhat-operators-x8pdp\" (UID: \"134cc2ce-d598-4f3e-8e4d-0d52621fa050\") " pod="openshift-marketplace/redhat-operators-x8pdp"
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.595396 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/134cc2ce-d598-4f3e-8e4d-0d52621fa050-catalog-content\") pod \"redhat-operators-x8pdp\" (UID: \"134cc2ce-d598-4f3e-8e4d-0d52621fa050\") " pod="openshift-marketplace/redhat-operators-x8pdp"
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.595537 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/134cc2ce-d598-4f3e-8e4d-0d52621fa050-utilities\") pod \"redhat-operators-x8pdp\" (UID: \"134cc2ce-d598-4f3e-8e4d-0d52621fa050\") " pod="openshift-marketplace/redhat-operators-x8pdp"
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.615903 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmf9k\" (UniqueName: \"kubernetes.io/projected/134cc2ce-d598-4f3e-8e4d-0d52621fa050-kube-api-access-cmf9k\") pod \"redhat-operators-x8pdp\" (UID: \"134cc2ce-d598-4f3e-8e4d-0d52621fa050\") " pod="openshift-marketplace/redhat-operators-x8pdp"
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.716541 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x8pdp"
Jan 21 12:34:59 crc kubenswrapper[4881]: I0121 12:34:59.246182 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x8pdp"]
Jan 21 12:34:59 crc kubenswrapper[4881]: I0121 12:34:59.404585 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8pdp" event={"ID":"134cc2ce-d598-4f3e-8e4d-0d52621fa050","Type":"ContainerStarted","Data":"cb2b1cbb1fbd26965587ad7d26030f5cf1d51c84e4e2def7ab4d1253a5497981"}
Jan 21 12:35:00 crc kubenswrapper[4881]: I0121 12:35:00.365025 4881 generic.go:334] "Generic (PLEG): container finished" podID="134cc2ce-d598-4f3e-8e4d-0d52621fa050" containerID="398b4f488091ff49ef189925e727269e874db2445ca7a8ddd47eaae69295ebfc" exitCode=0
Jan 21 12:35:00 crc kubenswrapper[4881]: I0121 12:35:00.365173 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8pdp" event={"ID":"134cc2ce-d598-4f3e-8e4d-0d52621fa050","Type":"ContainerDied","Data":"398b4f488091ff49ef189925e727269e874db2445ca7a8ddd47eaae69295ebfc"}
Jan 21 12:35:02 crc kubenswrapper[4881]: I0121 12:35:02.417282 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8pdp" event={"ID":"134cc2ce-d598-4f3e-8e4d-0d52621fa050","Type":"ContainerStarted","Data":"1f290cadc3541924844870281ece658c9562f54d57a37cf220fdf38a5bf8d619"}
Jan 21 12:35:05 crc kubenswrapper[4881]: I0121 12:35:05.610703 4881 generic.go:334] "Generic (PLEG): container finished" podID="134cc2ce-d598-4f3e-8e4d-0d52621fa050" containerID="1f290cadc3541924844870281ece658c9562f54d57a37cf220fdf38a5bf8d619" exitCode=0
Jan 21 12:35:05 crc kubenswrapper[4881]: I0121 12:35:05.610890 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8pdp" event={"ID":"134cc2ce-d598-4f3e-8e4d-0d52621fa050","Type":"ContainerDied","Data":"1f290cadc3541924844870281ece658c9562f54d57a37cf220fdf38a5bf8d619"}
Jan 21 12:35:07 crc kubenswrapper[4881]: I0121 12:35:07.635804 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8pdp" event={"ID":"134cc2ce-d598-4f3e-8e4d-0d52621fa050","Type":"ContainerStarted","Data":"819e3d0f6e1c9842f15399003abf3a97022c2ca96f73b6e2e6bb6abc3c30b323"}
Jan 21 12:35:07 crc kubenswrapper[4881]: I0121 12:35:07.662223 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x8pdp" podStartSLOduration=3.285319415 podStartE2EDuration="9.662196441s" podCreationTimestamp="2026-01-21 12:34:58 +0000 UTC" firstStartedPulling="2026-01-21 12:35:00.367149038 +0000 UTC m=+5887.627105507" lastFinishedPulling="2026-01-21 12:35:06.744026044 +0000 UTC m=+5894.003982533" observedRunningTime="2026-01-21 12:35:07.660021727 +0000 UTC m=+5894.919978246" watchObservedRunningTime="2026-01-21 12:35:07.662196441 +0000 UTC m=+5894.922152950"
Jan 21 12:35:08 crc kubenswrapper[4881]: I0121 12:35:08.717325 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x8pdp"
Jan 21 12:35:08 crc kubenswrapper[4881]: I0121 12:35:08.717705 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x8pdp"
Jan 21 12:35:09 crc kubenswrapper[4881]: I0121 12:35:09.815536 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x8pdp" podUID="134cc2ce-d598-4f3e-8e4d-0d52621fa050" containerName="registry-server" probeResult="failure" output=<
Jan 21 12:35:09 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s
Jan 21 12:35:09 crc kubenswrapper[4881]: >
Jan 21 12:35:18 crc kubenswrapper[4881]: I0121 12:35:18.766186 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x8pdp"
Jan 21 12:35:18 crc kubenswrapper[4881]: I0121 12:35:18.860622 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x8pdp"
Jan 21 12:35:19 crc kubenswrapper[4881]: I0121 12:35:19.007123 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x8pdp"]
Jan 21 12:35:20 crc kubenswrapper[4881]: I0121 12:35:20.778975 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x8pdp" podUID="134cc2ce-d598-4f3e-8e4d-0d52621fa050" containerName="registry-server" containerID="cri-o://819e3d0f6e1c9842f15399003abf3a97022c2ca96f73b6e2e6bb6abc3c30b323" gracePeriod=2
Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.411868 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x8pdp"
Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.438554 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/134cc2ce-d598-4f3e-8e4d-0d52621fa050-utilities\") pod \"134cc2ce-d598-4f3e-8e4d-0d52621fa050\" (UID: \"134cc2ce-d598-4f3e-8e4d-0d52621fa050\") "
Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.438659 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/134cc2ce-d598-4f3e-8e4d-0d52621fa050-catalog-content\") pod \"134cc2ce-d598-4f3e-8e4d-0d52621fa050\" (UID: \"134cc2ce-d598-4f3e-8e4d-0d52621fa050\") "
Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.438701 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmf9k\" (UniqueName: \"kubernetes.io/projected/134cc2ce-d598-4f3e-8e4d-0d52621fa050-kube-api-access-cmf9k\") pod \"134cc2ce-d598-4f3e-8e4d-0d52621fa050\" (UID: \"134cc2ce-d598-4f3e-8e4d-0d52621fa050\") "
Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.439810 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/134cc2ce-d598-4f3e-8e4d-0d52621fa050-utilities" (OuterVolumeSpecName: "utilities") pod "134cc2ce-d598-4f3e-8e4d-0d52621fa050" (UID: "134cc2ce-d598-4f3e-8e4d-0d52621fa050"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.449352 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/134cc2ce-d598-4f3e-8e4d-0d52621fa050-kube-api-access-cmf9k" (OuterVolumeSpecName: "kube-api-access-cmf9k") pod "134cc2ce-d598-4f3e-8e4d-0d52621fa050" (UID: "134cc2ce-d598-4f3e-8e4d-0d52621fa050"). InnerVolumeSpecName "kube-api-access-cmf9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.540459 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/134cc2ce-d598-4f3e-8e4d-0d52621fa050-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.540497 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmf9k\" (UniqueName: \"kubernetes.io/projected/134cc2ce-d598-4f3e-8e4d-0d52621fa050-kube-api-access-cmf9k\") on node \"crc\" DevicePath \"\"" Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.585858 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/134cc2ce-d598-4f3e-8e4d-0d52621fa050-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "134cc2ce-d598-4f3e-8e4d-0d52621fa050" (UID: "134cc2ce-d598-4f3e-8e4d-0d52621fa050"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.642525 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/134cc2ce-d598-4f3e-8e4d-0d52621fa050-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.794665 4881 generic.go:334] "Generic (PLEG): container finished" podID="134cc2ce-d598-4f3e-8e4d-0d52621fa050" containerID="819e3d0f6e1c9842f15399003abf3a97022c2ca96f73b6e2e6bb6abc3c30b323" exitCode=0 Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.794715 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8pdp" event={"ID":"134cc2ce-d598-4f3e-8e4d-0d52621fa050","Type":"ContainerDied","Data":"819e3d0f6e1c9842f15399003abf3a97022c2ca96f73b6e2e6bb6abc3c30b323"} Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.794749 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8pdp" event={"ID":"134cc2ce-d598-4f3e-8e4d-0d52621fa050","Type":"ContainerDied","Data":"cb2b1cbb1fbd26965587ad7d26030f5cf1d51c84e4e2def7ab4d1253a5497981"} Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.794765 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x8pdp" Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.794774 4881 scope.go:117] "RemoveContainer" containerID="819e3d0f6e1c9842f15399003abf3a97022c2ca96f73b6e2e6bb6abc3c30b323" Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.825277 4881 scope.go:117] "RemoveContainer" containerID="1f290cadc3541924844870281ece658c9562f54d57a37cf220fdf38a5bf8d619" Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.846012 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x8pdp"] Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.853720 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x8pdp"] Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.860962 4881 scope.go:117] "RemoveContainer" containerID="398b4f488091ff49ef189925e727269e874db2445ca7a8ddd47eaae69295ebfc" Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.912635 4881 scope.go:117] "RemoveContainer" containerID="819e3d0f6e1c9842f15399003abf3a97022c2ca96f73b6e2e6bb6abc3c30b323" Jan 21 12:35:21 crc kubenswrapper[4881]: E0121 12:35:21.913132 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"819e3d0f6e1c9842f15399003abf3a97022c2ca96f73b6e2e6bb6abc3c30b323\": container with ID starting with 819e3d0f6e1c9842f15399003abf3a97022c2ca96f73b6e2e6bb6abc3c30b323 not found: ID does not exist" containerID="819e3d0f6e1c9842f15399003abf3a97022c2ca96f73b6e2e6bb6abc3c30b323" Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.913187 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"819e3d0f6e1c9842f15399003abf3a97022c2ca96f73b6e2e6bb6abc3c30b323"} err="failed to get container status \"819e3d0f6e1c9842f15399003abf3a97022c2ca96f73b6e2e6bb6abc3c30b323\": rpc error: code = NotFound desc = could not find container \"819e3d0f6e1c9842f15399003abf3a97022c2ca96f73b6e2e6bb6abc3c30b323\": container with ID starting with 819e3d0f6e1c9842f15399003abf3a97022c2ca96f73b6e2e6bb6abc3c30b323 not found: ID does not exist" Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.913222 4881 scope.go:117] "RemoveContainer" containerID="1f290cadc3541924844870281ece658c9562f54d57a37cf220fdf38a5bf8d619" Jan 21 12:35:21 crc kubenswrapper[4881]: E0121 12:35:21.913651 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f290cadc3541924844870281ece658c9562f54d57a37cf220fdf38a5bf8d619\": container with ID starting with 1f290cadc3541924844870281ece658c9562f54d57a37cf220fdf38a5bf8d619 not found: ID does not exist" containerID="1f290cadc3541924844870281ece658c9562f54d57a37cf220fdf38a5bf8d619" Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.913682 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f290cadc3541924844870281ece658c9562f54d57a37cf220fdf38a5bf8d619"} err="failed to get container status \"1f290cadc3541924844870281ece658c9562f54d57a37cf220fdf38a5bf8d619\": rpc error: code = NotFound desc = could not find container \"1f290cadc3541924844870281ece658c9562f54d57a37cf220fdf38a5bf8d619\": container with ID starting with 1f290cadc3541924844870281ece658c9562f54d57a37cf220fdf38a5bf8d619 not found: ID does not exist" Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.913704 4881 scope.go:117] "RemoveContainer" 
containerID="398b4f488091ff49ef189925e727269e874db2445ca7a8ddd47eaae69295ebfc" Jan 21 12:35:21 crc kubenswrapper[4881]: E0121 12:35:21.913939 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"398b4f488091ff49ef189925e727269e874db2445ca7a8ddd47eaae69295ebfc\": container with ID starting with 398b4f488091ff49ef189925e727269e874db2445ca7a8ddd47eaae69295ebfc not found: ID does not exist" containerID="398b4f488091ff49ef189925e727269e874db2445ca7a8ddd47eaae69295ebfc" Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.913961 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"398b4f488091ff49ef189925e727269e874db2445ca7a8ddd47eaae69295ebfc"} err="failed to get container status \"398b4f488091ff49ef189925e727269e874db2445ca7a8ddd47eaae69295ebfc\": rpc error: code = NotFound desc = could not find container \"398b4f488091ff49ef189925e727269e874db2445ca7a8ddd47eaae69295ebfc\": container with ID starting with 398b4f488091ff49ef189925e727269e874db2445ca7a8ddd47eaae69295ebfc not found: ID does not exist" Jan 21 12:35:23 crc kubenswrapper[4881]: I0121 12:35:23.328085 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="134cc2ce-d598-4f3e-8e4d-0d52621fa050" path="/var/lib/kubelet/pods/134cc2ce-d598-4f3e-8e4d-0d52621fa050/volumes" Jan 21 12:35:29 crc kubenswrapper[4881]: I0121 12:35:29.850738 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:35:29 crc kubenswrapper[4881]: I0121 12:35:29.851205 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:35:59 crc kubenswrapper[4881]: I0121 12:35:59.850692 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:35:59 crc kubenswrapper[4881]: I0121 12:35:59.851267 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:36:29 crc kubenswrapper[4881]: I0121 12:36:29.851404 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:36:29 crc kubenswrapper[4881]: I0121 12:36:29.853632 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:36:29 crc kubenswrapper[4881]: I0121 12:36:29.853839 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 12:36:29 crc kubenswrapper[4881]: I0121 12:36:29.854763 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 12:36:29 crc kubenswrapper[4881]: I0121 12:36:29.855034 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" gracePeriod=600 Jan 21 12:36:29 crc kubenswrapper[4881]: E0121 12:36:29.977451 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:36:30 crc kubenswrapper[4881]: I0121 12:36:30.701064 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" exitCode=0 Jan 21 12:36:30 crc kubenswrapper[4881]: I0121 12:36:30.701083 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3"} Jan 21 12:36:30 crc kubenswrapper[4881]: I0121 12:36:30.701198 4881 scope.go:117] "RemoveContainer" containerID="5ce4f2646890b2b0b35075452c84c9194c468c1e2e3c942d6c0c4679e67f5d4f" Jan 21 12:36:30 crc kubenswrapper[4881]: I0121 12:36:30.702122 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:36:30 crc kubenswrapper[4881]: E0121 12:36:30.703085 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.507520 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-56jq6"] Jan 21 12:36:34 crc kubenswrapper[4881]: E0121 12:36:34.508587 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="134cc2ce-d598-4f3e-8e4d-0d52621fa050" containerName="extract-content" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.508602 4881 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="134cc2ce-d598-4f3e-8e4d-0d52621fa050" containerName="extract-content" Jan 21 12:36:34 crc kubenswrapper[4881]: E0121 12:36:34.508614 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="134cc2ce-d598-4f3e-8e4d-0d52621fa050" containerName="registry-server" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.508620 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="134cc2ce-d598-4f3e-8e4d-0d52621fa050" containerName="registry-server" Jan 21 12:36:34 crc kubenswrapper[4881]: E0121 12:36:34.508630 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="134cc2ce-d598-4f3e-8e4d-0d52621fa050" containerName="extract-utilities" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.508637 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="134cc2ce-d598-4f3e-8e4d-0d52621fa050" containerName="extract-utilities" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.508918 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="134cc2ce-d598-4f3e-8e4d-0d52621fa050" containerName="registry-server" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.510953 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.519719 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-56jq6"] Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.635207 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7jtm\" (UniqueName: \"kubernetes.io/projected/8ef74d66-0c28-4544-849f-27a618c07f25-kube-api-access-g7jtm\") pod \"certified-operators-56jq6\" (UID: \"8ef74d66-0c28-4544-849f-27a618c07f25\") " pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.635440 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ef74d66-0c28-4544-849f-27a618c07f25-utilities\") pod \"certified-operators-56jq6\" (UID: \"8ef74d66-0c28-4544-849f-27a618c07f25\") " pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.635851 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ef74d66-0c28-4544-849f-27a618c07f25-catalog-content\") pod \"certified-operators-56jq6\" (UID: \"8ef74d66-0c28-4544-849f-27a618c07f25\") " pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.737975 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ef74d66-0c28-4544-849f-27a618c07f25-utilities\") pod \"certified-operators-56jq6\" (UID: \"8ef74d66-0c28-4544-849f-27a618c07f25\") " pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.738115 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ef74d66-0c28-4544-849f-27a618c07f25-catalog-content\") pod \"certified-operators-56jq6\" (UID: \"8ef74d66-0c28-4544-849f-27a618c07f25\") " pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.738294 4881 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7jtm\" (UniqueName: \"kubernetes.io/projected/8ef74d66-0c28-4544-849f-27a618c07f25-kube-api-access-g7jtm\") pod \"certified-operators-56jq6\" (UID: \"8ef74d66-0c28-4544-849f-27a618c07f25\") " pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.738567 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ef74d66-0c28-4544-849f-27a618c07f25-catalog-content\") pod \"certified-operators-56jq6\" (UID: \"8ef74d66-0c28-4544-849f-27a618c07f25\") " pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.738763 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ef74d66-0c28-4544-849f-27a618c07f25-utilities\") pod \"certified-operators-56jq6\" (UID: \"8ef74d66-0c28-4544-849f-27a618c07f25\") " pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.758712 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7jtm\" (UniqueName: \"kubernetes.io/projected/8ef74d66-0c28-4544-849f-27a618c07f25-kube-api-access-g7jtm\") pod \"certified-operators-56jq6\" (UID: \"8ef74d66-0c28-4544-849f-27a618c07f25\") " pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.835170 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:35 crc kubenswrapper[4881]: I0121 12:36:35.354572 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-56jq6"] Jan 21 12:36:35 crc kubenswrapper[4881]: I0121 12:36:35.774558 4881 generic.go:334] "Generic (PLEG): container finished" podID="8ef74d66-0c28-4544-849f-27a618c07f25" containerID="94358427c0b7aad8c60ccf1f15d3a5bdd6fe48a1d0ce0fffd39e8e43512aae28" exitCode=0 Jan 21 12:36:35 crc kubenswrapper[4881]: I0121 12:36:35.774649 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56jq6" event={"ID":"8ef74d66-0c28-4544-849f-27a618c07f25","Type":"ContainerDied","Data":"94358427c0b7aad8c60ccf1f15d3a5bdd6fe48a1d0ce0fffd39e8e43512aae28"} Jan 21 12:36:35 crc kubenswrapper[4881]: I0121 12:36:35.775093 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56jq6" event={"ID":"8ef74d66-0c28-4544-849f-27a618c07f25","Type":"ContainerStarted","Data":"7c991126e180b43f8ed8051ea2a401c78bffed23d0b8cb311f41cf189fbd2dfa"} Jan 21 12:36:36 crc kubenswrapper[4881]: I0121 12:36:36.789212 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56jq6" event={"ID":"8ef74d66-0c28-4544-849f-27a618c07f25","Type":"ContainerStarted","Data":"d37189c03972c86a5249beff3ff66068254eecbcbd8f696c02ec91aab34478d7"} Jan 21 12:36:37 crc kubenswrapper[4881]: I0121 12:36:37.801487 4881 generic.go:334] "Generic (PLEG): container finished" podID="8ef74d66-0c28-4544-849f-27a618c07f25" containerID="d37189c03972c86a5249beff3ff66068254eecbcbd8f696c02ec91aab34478d7" exitCode=0 Jan 21 12:36:37 crc kubenswrapper[4881]: I0121 12:36:37.801732 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56jq6" 
event={"ID":"8ef74d66-0c28-4544-849f-27a618c07f25","Type":"ContainerDied","Data":"d37189c03972c86a5249beff3ff66068254eecbcbd8f696c02ec91aab34478d7"} Jan 21 12:36:38 crc kubenswrapper[4881]: I0121 12:36:38.816713 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56jq6" event={"ID":"8ef74d66-0c28-4544-849f-27a618c07f25","Type":"ContainerStarted","Data":"b23b18c80bd46c2b1574da5ddf36ca2de500862eaba1c7c8da6864b9043b3793"} Jan 21 12:36:38 crc kubenswrapper[4881]: I0121 12:36:38.849599 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-56jq6" podStartSLOduration=2.414744082 podStartE2EDuration="4.849570889s" podCreationTimestamp="2026-01-21 12:36:34 +0000 UTC" firstStartedPulling="2026-01-21 12:36:35.778198326 +0000 UTC m=+5983.038154805" lastFinishedPulling="2026-01-21 12:36:38.213025143 +0000 UTC m=+5985.472981612" observedRunningTime="2026-01-21 12:36:38.838924019 +0000 UTC m=+5986.098880498" watchObservedRunningTime="2026-01-21 12:36:38.849570889 +0000 UTC m=+5986.109527368" Jan 21 12:36:44 crc kubenswrapper[4881]: I0121 12:36:44.311817 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:36:44 crc kubenswrapper[4881]: E0121 12:36:44.312936 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:36:44 crc kubenswrapper[4881]: I0121 12:36:44.836231 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:44 crc kubenswrapper[4881]: I0121 12:36:44.836307 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:44 crc kubenswrapper[4881]: I0121 12:36:44.884637 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:44 crc kubenswrapper[4881]: I0121 12:36:44.934282 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:45 crc kubenswrapper[4881]: I0121 12:36:45.131330 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-56jq6"] Jan 21 12:36:46 crc kubenswrapper[4881]: I0121 12:36:46.902482 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-56jq6" podUID="8ef74d66-0c28-4544-849f-27a618c07f25" containerName="registry-server" containerID="cri-o://b23b18c80bd46c2b1574da5ddf36ca2de500862eaba1c7c8da6864b9043b3793" gracePeriod=2 Jan 21 12:36:47 crc kubenswrapper[4881]: I0121 12:36:47.919932 4881 generic.go:334] "Generic (PLEG): container finished" podID="8ef74d66-0c28-4544-849f-27a618c07f25" containerID="b23b18c80bd46c2b1574da5ddf36ca2de500862eaba1c7c8da6864b9043b3793" exitCode=0 Jan 21 12:36:47 crc kubenswrapper[4881]: I0121 12:36:47.920006 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56jq6" 
event={"ID":"8ef74d66-0c28-4544-849f-27a618c07f25","Type":"ContainerDied","Data":"b23b18c80bd46c2b1574da5ddf36ca2de500862eaba1c7c8da6864b9043b3793"} Jan 21 12:36:47 crc kubenswrapper[4881]: I0121 12:36:47.920517 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56jq6" event={"ID":"8ef74d66-0c28-4544-849f-27a618c07f25","Type":"ContainerDied","Data":"7c991126e180b43f8ed8051ea2a401c78bffed23d0b8cb311f41cf189fbd2dfa"} Jan 21 12:36:47 crc kubenswrapper[4881]: I0121 12:36:47.920537 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c991126e180b43f8ed8051ea2a401c78bffed23d0b8cb311f41cf189fbd2dfa" Jan 21 12:36:47 crc kubenswrapper[4881]: I0121 12:36:47.922175 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:48 crc kubenswrapper[4881]: I0121 12:36:48.086207 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ef74d66-0c28-4544-849f-27a618c07f25-utilities\") pod \"8ef74d66-0c28-4544-849f-27a618c07f25\" (UID: \"8ef74d66-0c28-4544-849f-27a618c07f25\") " Jan 21 12:36:48 crc kubenswrapper[4881]: I0121 12:36:48.086448 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ef74d66-0c28-4544-849f-27a618c07f25-catalog-content\") pod \"8ef74d66-0c28-4544-849f-27a618c07f25\" (UID: \"8ef74d66-0c28-4544-849f-27a618c07f25\") " Jan 21 12:36:48 crc kubenswrapper[4881]: I0121 12:36:48.086600 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7jtm\" (UniqueName: \"kubernetes.io/projected/8ef74d66-0c28-4544-849f-27a618c07f25-kube-api-access-g7jtm\") pod \"8ef74d66-0c28-4544-849f-27a618c07f25\" (UID: \"8ef74d66-0c28-4544-849f-27a618c07f25\") " Jan 21 12:36:48 crc kubenswrapper[4881]: I0121 12:36:48.087251 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ef74d66-0c28-4544-849f-27a618c07f25-utilities" (OuterVolumeSpecName: "utilities") pod "8ef74d66-0c28-4544-849f-27a618c07f25" (UID: "8ef74d66-0c28-4544-849f-27a618c07f25"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:36:48 crc kubenswrapper[4881]: I0121 12:36:48.092633 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ef74d66-0c28-4544-849f-27a618c07f25-kube-api-access-g7jtm" (OuterVolumeSpecName: "kube-api-access-g7jtm") pod "8ef74d66-0c28-4544-849f-27a618c07f25" (UID: "8ef74d66-0c28-4544-849f-27a618c07f25"). InnerVolumeSpecName "kube-api-access-g7jtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:36:48 crc kubenswrapper[4881]: I0121 12:36:48.142851 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ef74d66-0c28-4544-849f-27a618c07f25-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ef74d66-0c28-4544-849f-27a618c07f25" (UID: "8ef74d66-0c28-4544-849f-27a618c07f25"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:36:48 crc kubenswrapper[4881]: I0121 12:36:48.188955 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7jtm\" (UniqueName: \"kubernetes.io/projected/8ef74d66-0c28-4544-849f-27a618c07f25-kube-api-access-g7jtm\") on node \"crc\" DevicePath \"\"" Jan 21 12:36:48 crc kubenswrapper[4881]: I0121 12:36:48.188983 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ef74d66-0c28-4544-849f-27a618c07f25-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:36:48 crc kubenswrapper[4881]: I0121 12:36:48.188993 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ef74d66-0c28-4544-849f-27a618c07f25-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:36:48 crc kubenswrapper[4881]: I0121 12:36:48.938858 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:48 crc kubenswrapper[4881]: I0121 12:36:48.998976 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-56jq6"] Jan 21 12:36:49 crc kubenswrapper[4881]: I0121 12:36:49.007043 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-56jq6"] Jan 21 12:36:49 crc kubenswrapper[4881]: I0121 12:36:49.324239 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ef74d66-0c28-4544-849f-27a618c07f25" path="/var/lib/kubelet/pods/8ef74d66-0c28-4544-849f-27a618c07f25/volumes" Jan 21 12:36:57 crc kubenswrapper[4881]: I0121 12:36:57.312259 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:36:57 crc kubenswrapper[4881]: E0121 12:36:57.315298 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:37:09 crc kubenswrapper[4881]: I0121 12:37:09.568208 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-volume-nfs-0" podUID="8c912ca5-a82b-4083-8579-f0f6f506eebb" containerName="cinder-volume" probeResult="failure" output="Get \"http://10.217.1.6:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 12:37:11 crc kubenswrapper[4881]: I0121 12:37:11.323229 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:37:11 crc kubenswrapper[4881]: E0121 12:37:11.338627 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:37:22 crc kubenswrapper[4881]: I0121 12:37:22.311038 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" 
Jan 21 12:37:22 crc kubenswrapper[4881]: E0121 12:37:22.311956 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:37:37 crc kubenswrapper[4881]: I0121 12:37:37.311830 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:37:37 crc kubenswrapper[4881]: E0121 12:37:37.313163 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:37:48 crc kubenswrapper[4881]: I0121 12:37:48.311186 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:37:48 crc kubenswrapper[4881]: E0121 12:37:48.312781 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:37:51 crc kubenswrapper[4881]: I0121 12:37:51.928011 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-5qcms" podUID="d0cafd1d-5f37-499a-a531-547a137aae21" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 12:38:03 crc kubenswrapper[4881]: I0121 12:38:03.322847 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:38:03 crc kubenswrapper[4881]: E0121 12:38:03.323769 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:38:17 crc kubenswrapper[4881]: I0121 12:38:17.311447 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:38:17 crc kubenswrapper[4881]: E0121 12:38:17.312666 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" 
podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:38:30 crc kubenswrapper[4881]: I0121 12:38:30.310516 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:38:30 crc kubenswrapper[4881]: E0121 12:38:30.312641 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:38:43 crc kubenswrapper[4881]: I0121 12:38:43.319732 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:38:43 crc kubenswrapper[4881]: E0121 12:38:43.320651 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:38:54 crc kubenswrapper[4881]: I0121 12:38:54.312824 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:38:54 crc kubenswrapper[4881]: E0121 12:38:54.314937 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:39:05 crc kubenswrapper[4881]: I0121 12:39:05.311593 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:39:05 crc kubenswrapper[4881]: E0121 12:39:05.312988 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.230654 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s6n4b"] Jan 21 12:39:06 crc kubenswrapper[4881]: E0121 12:39:06.231596 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ef74d66-0c28-4544-849f-27a618c07f25" containerName="extract-utilities" Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.231633 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ef74d66-0c28-4544-849f-27a618c07f25" containerName="extract-utilities" Jan 21 12:39:06 crc kubenswrapper[4881]: E0121 12:39:06.231676 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ef74d66-0c28-4544-849f-27a618c07f25" containerName="registry-server" Jan 21 12:39:06 crc kubenswrapper[4881]: 
I0121 12:39:06.231689 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ef74d66-0c28-4544-849f-27a618c07f25" containerName="registry-server" Jan 21 12:39:06 crc kubenswrapper[4881]: E0121 12:39:06.231733 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ef74d66-0c28-4544-849f-27a618c07f25" containerName="extract-content" Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.231751 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ef74d66-0c28-4544-849f-27a618c07f25" containerName="extract-content" Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.232178 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ef74d66-0c28-4544-849f-27a618c07f25" containerName="registry-server" Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.235090 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.245937 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s6n4b"] Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.442129 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfjw5\" (UniqueName: \"kubernetes.io/projected/7456574a-75d3-47a1-a584-c552d4806d47-kube-api-access-wfjw5\") pod \"community-operators-s6n4b\" (UID: \"7456574a-75d3-47a1-a584-c552d4806d47\") " pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.442548 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7456574a-75d3-47a1-a584-c552d4806d47-utilities\") pod \"community-operators-s6n4b\" (UID: \"7456574a-75d3-47a1-a584-c552d4806d47\") " pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.443486 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7456574a-75d3-47a1-a584-c552d4806d47-catalog-content\") pod \"community-operators-s6n4b\" (UID: \"7456574a-75d3-47a1-a584-c552d4806d47\") " pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.544266 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7456574a-75d3-47a1-a584-c552d4806d47-utilities\") pod \"community-operators-s6n4b\" (UID: \"7456574a-75d3-47a1-a584-c552d4806d47\") " pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.544415 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7456574a-75d3-47a1-a584-c552d4806d47-catalog-content\") pod \"community-operators-s6n4b\" (UID: \"7456574a-75d3-47a1-a584-c552d4806d47\") " pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.544478 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfjw5\" (UniqueName: \"kubernetes.io/projected/7456574a-75d3-47a1-a584-c552d4806d47-kube-api-access-wfjw5\") pod \"community-operators-s6n4b\" (UID: \"7456574a-75d3-47a1-a584-c552d4806d47\") " 
pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.544852 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7456574a-75d3-47a1-a584-c552d4806d47-utilities\") pod \"community-operators-s6n4b\" (UID: \"7456574a-75d3-47a1-a584-c552d4806d47\") " pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.544986 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7456574a-75d3-47a1-a584-c552d4806d47-catalog-content\") pod \"community-operators-s6n4b\" (UID: \"7456574a-75d3-47a1-a584-c552d4806d47\") " pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.562544 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfjw5\" (UniqueName: \"kubernetes.io/projected/7456574a-75d3-47a1-a584-c552d4806d47-kube-api-access-wfjw5\") pod \"community-operators-s6n4b\" (UID: \"7456574a-75d3-47a1-a584-c552d4806d47\") " pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.570162 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:07 crc kubenswrapper[4881]: I0121 12:39:07.146882 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s6n4b"] Jan 21 12:39:07 crc kubenswrapper[4881]: I0121 12:39:07.702748 4881 generic.go:334] "Generic (PLEG): container finished" podID="7456574a-75d3-47a1-a584-c552d4806d47" containerID="74729fa5cab0891d63e4e5947225d9594300869de530f248fcfa19b346e40c61" exitCode=0 Jan 21 12:39:07 crc kubenswrapper[4881]: I0121 12:39:07.702807 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6n4b" event={"ID":"7456574a-75d3-47a1-a584-c552d4806d47","Type":"ContainerDied","Data":"74729fa5cab0891d63e4e5947225d9594300869de530f248fcfa19b346e40c61"} Jan 21 12:39:07 crc kubenswrapper[4881]: I0121 12:39:07.703023 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6n4b" event={"ID":"7456574a-75d3-47a1-a584-c552d4806d47","Type":"ContainerStarted","Data":"ba52beca2b7b22a072d2fac530ed6a3181fc174a60547351bd072b0dd6060fd0"} Jan 21 12:39:08 crc kubenswrapper[4881]: I0121 12:39:08.720883 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6n4b" event={"ID":"7456574a-75d3-47a1-a584-c552d4806d47","Type":"ContainerStarted","Data":"8f1a88a741efb50c9477fba65c0c2eb70c9c999b88931bb0d8c95e3c35ef4e92"} Jan 21 12:39:09 crc kubenswrapper[4881]: I0121 12:39:09.734601 4881 generic.go:334] "Generic (PLEG): container finished" podID="7456574a-75d3-47a1-a584-c552d4806d47" containerID="8f1a88a741efb50c9477fba65c0c2eb70c9c999b88931bb0d8c95e3c35ef4e92" exitCode=0 Jan 21 12:39:09 crc kubenswrapper[4881]: I0121 12:39:09.734675 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6n4b" event={"ID":"7456574a-75d3-47a1-a584-c552d4806d47","Type":"ContainerDied","Data":"8f1a88a741efb50c9477fba65c0c2eb70c9c999b88931bb0d8c95e3c35ef4e92"} Jan 21 12:39:10 crc kubenswrapper[4881]: I0121 12:39:10.747899 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-s6n4b" event={"ID":"7456574a-75d3-47a1-a584-c552d4806d47","Type":"ContainerStarted","Data":"2b7974a76c2489b8201ce7ddb4f20b05c989f1e01d022eb6721825ad74d4ebca"} Jan 21 12:39:10 crc kubenswrapper[4881]: I0121 12:39:10.779470 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s6n4b" podStartSLOduration=2.29504583 podStartE2EDuration="4.779443121s" podCreationTimestamp="2026-01-21 12:39:06 +0000 UTC" firstStartedPulling="2026-01-21 12:39:07.704744586 +0000 UTC m=+6134.964701045" lastFinishedPulling="2026-01-21 12:39:10.189141867 +0000 UTC m=+6137.449098336" observedRunningTime="2026-01-21 12:39:10.772890501 +0000 UTC m=+6138.032847070" watchObservedRunningTime="2026-01-21 12:39:10.779443121 +0000 UTC m=+6138.039399630" Jan 21 12:39:16 crc kubenswrapper[4881]: I0121 12:39:16.311510 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:39:16 crc kubenswrapper[4881]: E0121 12:39:16.312832 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:39:16 crc kubenswrapper[4881]: I0121 12:39:16.571023 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:16 crc kubenswrapper[4881]: I0121 12:39:16.571081 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:16 crc kubenswrapper[4881]: I0121 12:39:16.623158 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:16 crc kubenswrapper[4881]: I0121 12:39:16.862614 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:16 crc kubenswrapper[4881]: I0121 12:39:16.922493 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s6n4b"] Jan 21 12:39:18 crc kubenswrapper[4881]: I0121 12:39:18.835835 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s6n4b" podUID="7456574a-75d3-47a1-a584-c552d4806d47" containerName="registry-server" containerID="cri-o://2b7974a76c2489b8201ce7ddb4f20b05c989f1e01d022eb6721825ad74d4ebca" gracePeriod=2 Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.395972 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.507876 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7456574a-75d3-47a1-a584-c552d4806d47-utilities\") pod \"7456574a-75d3-47a1-a584-c552d4806d47\" (UID: \"7456574a-75d3-47a1-a584-c552d4806d47\") " Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.508097 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfjw5\" (UniqueName: \"kubernetes.io/projected/7456574a-75d3-47a1-a584-c552d4806d47-kube-api-access-wfjw5\") pod \"7456574a-75d3-47a1-a584-c552d4806d47\" (UID: \"7456574a-75d3-47a1-a584-c552d4806d47\") " Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.508123 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7456574a-75d3-47a1-a584-c552d4806d47-catalog-content\") pod \"7456574a-75d3-47a1-a584-c552d4806d47\" (UID: \"7456574a-75d3-47a1-a584-c552d4806d47\") " Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.508627 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7456574a-75d3-47a1-a584-c552d4806d47-utilities" (OuterVolumeSpecName: "utilities") pod "7456574a-75d3-47a1-a584-c552d4806d47" (UID: "7456574a-75d3-47a1-a584-c552d4806d47"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.513900 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7456574a-75d3-47a1-a584-c552d4806d47-kube-api-access-wfjw5" (OuterVolumeSpecName: "kube-api-access-wfjw5") pod "7456574a-75d3-47a1-a584-c552d4806d47" (UID: "7456574a-75d3-47a1-a584-c552d4806d47"). InnerVolumeSpecName "kube-api-access-wfjw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.590540 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7456574a-75d3-47a1-a584-c552d4806d47-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7456574a-75d3-47a1-a584-c552d4806d47" (UID: "7456574a-75d3-47a1-a584-c552d4806d47"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.610743 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfjw5\" (UniqueName: \"kubernetes.io/projected/7456574a-75d3-47a1-a584-c552d4806d47-kube-api-access-wfjw5\") on node \"crc\" DevicePath \"\"" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.610775 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7456574a-75d3-47a1-a584-c552d4806d47-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.610803 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7456574a-75d3-47a1-a584-c552d4806d47-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.846960 4881 generic.go:334] "Generic (PLEG): container finished" podID="7456574a-75d3-47a1-a584-c552d4806d47" containerID="2b7974a76c2489b8201ce7ddb4f20b05c989f1e01d022eb6721825ad74d4ebca" exitCode=0 Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.847014 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6n4b" event={"ID":"7456574a-75d3-47a1-a584-c552d4806d47","Type":"ContainerDied","Data":"2b7974a76c2489b8201ce7ddb4f20b05c989f1e01d022eb6721825ad74d4ebca"} Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.847021 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.847056 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6n4b" event={"ID":"7456574a-75d3-47a1-a584-c552d4806d47","Type":"ContainerDied","Data":"ba52beca2b7b22a072d2fac530ed6a3181fc174a60547351bd072b0dd6060fd0"} Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.847081 4881 scope.go:117] "RemoveContainer" containerID="2b7974a76c2489b8201ce7ddb4f20b05c989f1e01d022eb6721825ad74d4ebca" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.881030 4881 scope.go:117] "RemoveContainer" containerID="8f1a88a741efb50c9477fba65c0c2eb70c9c999b88931bb0d8c95e3c35ef4e92" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.908577 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s6n4b"] Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.920247 4881 scope.go:117] "RemoveContainer" containerID="74729fa5cab0891d63e4e5947225d9594300869de530f248fcfa19b346e40c61" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.923983 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s6n4b"] Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.969836 4881 scope.go:117] "RemoveContainer" containerID="2b7974a76c2489b8201ce7ddb4f20b05c989f1e01d022eb6721825ad74d4ebca" Jan 21 12:39:19 crc kubenswrapper[4881]: E0121 12:39:19.970629 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b7974a76c2489b8201ce7ddb4f20b05c989f1e01d022eb6721825ad74d4ebca\": container with ID starting with 2b7974a76c2489b8201ce7ddb4f20b05c989f1e01d022eb6721825ad74d4ebca not found: ID does not exist" containerID="2b7974a76c2489b8201ce7ddb4f20b05c989f1e01d022eb6721825ad74d4ebca" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.970692 
4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b7974a76c2489b8201ce7ddb4f20b05c989f1e01d022eb6721825ad74d4ebca"} err="failed to get container status \"2b7974a76c2489b8201ce7ddb4f20b05c989f1e01d022eb6721825ad74d4ebca\": rpc error: code = NotFound desc = could not find container \"2b7974a76c2489b8201ce7ddb4f20b05c989f1e01d022eb6721825ad74d4ebca\": container with ID starting with 2b7974a76c2489b8201ce7ddb4f20b05c989f1e01d022eb6721825ad74d4ebca not found: ID does not exist" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.970731 4881 scope.go:117] "RemoveContainer" containerID="8f1a88a741efb50c9477fba65c0c2eb70c9c999b88931bb0d8c95e3c35ef4e92" Jan 21 12:39:19 crc kubenswrapper[4881]: E0121 12:39:19.971380 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f1a88a741efb50c9477fba65c0c2eb70c9c999b88931bb0d8c95e3c35ef4e92\": container with ID starting with 8f1a88a741efb50c9477fba65c0c2eb70c9c999b88931bb0d8c95e3c35ef4e92 not found: ID does not exist" containerID="8f1a88a741efb50c9477fba65c0c2eb70c9c999b88931bb0d8c95e3c35ef4e92" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.971453 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f1a88a741efb50c9477fba65c0c2eb70c9c999b88931bb0d8c95e3c35ef4e92"} err="failed to get container status \"8f1a88a741efb50c9477fba65c0c2eb70c9c999b88931bb0d8c95e3c35ef4e92\": rpc error: code = NotFound desc = could not find container \"8f1a88a741efb50c9477fba65c0c2eb70c9c999b88931bb0d8c95e3c35ef4e92\": container with ID starting with 8f1a88a741efb50c9477fba65c0c2eb70c9c999b88931bb0d8c95e3c35ef4e92 not found: ID does not exist" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.971497 4881 scope.go:117] "RemoveContainer" containerID="74729fa5cab0891d63e4e5947225d9594300869de530f248fcfa19b346e40c61" Jan 21 12:39:19 crc kubenswrapper[4881]: E0121 12:39:19.971944 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74729fa5cab0891d63e4e5947225d9594300869de530f248fcfa19b346e40c61\": container with ID starting with 74729fa5cab0891d63e4e5947225d9594300869de530f248fcfa19b346e40c61 not found: ID does not exist" containerID="74729fa5cab0891d63e4e5947225d9594300869de530f248fcfa19b346e40c61" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.971986 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74729fa5cab0891d63e4e5947225d9594300869de530f248fcfa19b346e40c61"} err="failed to get container status \"74729fa5cab0891d63e4e5947225d9594300869de530f248fcfa19b346e40c61\": rpc error: code = NotFound desc = could not find container \"74729fa5cab0891d63e4e5947225d9594300869de530f248fcfa19b346e40c61\": container with ID starting with 74729fa5cab0891d63e4e5947225d9594300869de530f248fcfa19b346e40c61 not found: ID does not exist" Jan 21 12:39:21 crc kubenswrapper[4881]: I0121 12:39:21.324635 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7456574a-75d3-47a1-a584-c552d4806d47" path="/var/lib/kubelet/pods/7456574a-75d3-47a1-a584-c552d4806d47/volumes" Jan 21 12:39:30 crc kubenswrapper[4881]: I0121 12:39:30.313501 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:39:30 crc kubenswrapper[4881]: E0121 12:39:30.314964 4881 pod_workers.go:1301] "Error syncing pod, skipping" 
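The NotFound noise above is benign and expected: the PLEG-driven cleanup at 12:39:19.847 already deleted the pod's three containers, and the API-driven "SyncLoop REMOVE" path then re-requests removal and finds them gone. Cleanup paths like this are written so that "already gone" counts as success; a generic sketch of the idiom (hypothetical names, not kubelet source):

class NotFoundError(Exception):
    """Stand-in for a runtime response with gRPC code = NotFound."""

def remove_container(runtime, container_id):
    # Two reconcile paths can race to delete the same container;
    # for cleanup, "already removed" is indistinguishable from success.
    try:
        runtime.remove(container_id)
    except NotFoundError:
        pass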
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:39:42 crc kubenswrapper[4881]: I0121 12:39:42.311518 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:39:42 crc kubenswrapper[4881]: E0121 12:39:42.312759 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:39:54 crc kubenswrapper[4881]: I0121 12:39:54.311530 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:39:54 crc kubenswrapper[4881]: E0121 12:39:54.312341 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:40:09 crc kubenswrapper[4881]: I0121 12:40:09.311139 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:40:09 crc kubenswrapper[4881]: E0121 12:40:09.311976 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:40:22 crc kubenswrapper[4881]: I0121 12:40:22.311221 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:40:22 crc kubenswrapper[4881]: E0121 12:40:22.313807 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:40:35 crc kubenswrapper[4881]: I0121 12:40:35.310630 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:40:35 crc kubenswrapper[4881]: E0121 12:40:35.311647 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:40:50 crc kubenswrapper[4881]: I0121 12:40:50.311266 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:40:50 crc kubenswrapper[4881]: E0121 12:40:50.312012 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:41:02 crc kubenswrapper[4881]: I0121 12:41:02.311981 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:41:02 crc kubenswrapper[4881]: E0121 12:41:02.313821 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:41:13 crc kubenswrapper[4881]: I0121 12:41:13.322694 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:41:13 crc kubenswrapper[4881]: E0121 12:41:13.323619 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:41:25 crc kubenswrapper[4881]: I0121 12:41:25.322375 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:41:25 crc kubenswrapper[4881]: E0121 12:41:25.323753 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:41:36 crc kubenswrapper[4881]: I0121 12:41:36.311561 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:41:36 crc kubenswrapper[4881]: I0121 12:41:36.704700 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"171b155437f4c8383a0145071a128693d76b7a6e60a851ddb744837ea725325c"} Jan 21 12:43:34 crc kubenswrapper[4881]: I0121 12:43:34.468879 4881 scope.go:117] "RemoveContainer" 
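Worth decoding in the run above: the pairs repeating every 12-15 seconds are not restart attempts. Each is a pod-worker sync that is refused because the crash-loop backoff window is still open; kubelet's backoff doubles per failed restart from 10 s up to a 5 m cap (those are the upstream defaults, and the sketch below is illustrative, not kubelet source). Only at 12:41:36, once the 5 m window has elapsed, does the container actually start again.

# Kubelet-style crash-loop backoff: double per failed restart, capped.
def backoff_schedule(initial=10, cap=300, restarts=8):
    delay = initial
    for _ in range(restarts):
        yield min(delay, cap)
        delay *= 2

print(list(backoff_schedule()))  # [10, 20, 40, 80, 160, 300, 300, 300]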
containerID="d37189c03972c86a5249beff3ff66068254eecbcbd8f696c02ec91aab34478d7" Jan 21 12:43:34 crc kubenswrapper[4881]: I0121 12:43:34.495667 4881 scope.go:117] "RemoveContainer" containerID="94358427c0b7aad8c60ccf1f15d3a5bdd6fe48a1d0ce0fffd39e8e43512aae28" Jan 21 12:43:34 crc kubenswrapper[4881]: I0121 12:43:34.555265 4881 scope.go:117] "RemoveContainer" containerID="b23b18c80bd46c2b1574da5ddf36ca2de500862eaba1c7c8da6864b9043b3793" Jan 21 12:43:59 crc kubenswrapper[4881]: I0121 12:43:59.850694 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:43:59 crc kubenswrapper[4881]: I0121 12:43:59.851345 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:44:29 crc kubenswrapper[4881]: I0121 12:44:29.851630 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:44:29 crc kubenswrapper[4881]: I0121 12:44:29.853995 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:44:59 crc kubenswrapper[4881]: I0121 12:44:59.851504 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:44:59 crc kubenswrapper[4881]: I0121 12:44:59.852320 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:44:59 crc kubenswrapper[4881]: I0121 12:44:59.852455 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 12:44:59 crc kubenswrapper[4881]: I0121 12:44:59.854139 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"171b155437f4c8383a0145071a128693d76b7a6e60a851ddb744837ea725325c"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 12:44:59 crc kubenswrapper[4881]: I0121 12:44:59.854333 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" 
podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://171b155437f4c8383a0145071a128693d76b7a6e60a851ddb744837ea725325c" gracePeriod=600 Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.161896 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8"] Jan 21 12:45:00 crc kubenswrapper[4881]: E0121 12:45:00.162677 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7456574a-75d3-47a1-a584-c552d4806d47" containerName="extract-content" Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.162698 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7456574a-75d3-47a1-a584-c552d4806d47" containerName="extract-content" Jan 21 12:45:00 crc kubenswrapper[4881]: E0121 12:45:00.162733 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7456574a-75d3-47a1-a584-c552d4806d47" containerName="registry-server" Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.162740 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7456574a-75d3-47a1-a584-c552d4806d47" containerName="registry-server" Jan 21 12:45:00 crc kubenswrapper[4881]: E0121 12:45:00.162756 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7456574a-75d3-47a1-a584-c552d4806d47" containerName="extract-utilities" Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.162764 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7456574a-75d3-47a1-a584-c552d4806d47" containerName="extract-utilities" Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.162992 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="7456574a-75d3-47a1-a584-c552d4806d47" containerName="registry-server" Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.164794 4881 util.go:30] "No sandbox for pod can be found. 
Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.167894 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.169742 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.188995 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8"]
Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.273832 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-secret-volume\") pod \"collect-profiles-29483325-rzms8\" (UID: \"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8"
Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.273904 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7jk6\" (UniqueName: \"kubernetes.io/projected/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-kube-api-access-h7jk6\") pod \"collect-profiles-29483325-rzms8\" (UID: \"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8"
Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.274211 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-config-volume\") pod \"collect-profiles-29483325-rzms8\" (UID: \"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8"
Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.377188 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-config-volume\") pod \"collect-profiles-29483325-rzms8\" (UID: \"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8"
Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.377354 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-secret-volume\") pod \"collect-profiles-29483325-rzms8\" (UID: \"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8"
Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.377391 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7jk6\" (UniqueName: \"kubernetes.io/projected/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-kube-api-access-h7jk6\") pod \"collect-profiles-29483325-rzms8\" (UID: \"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8"
Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.379840 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-config-volume\") pod \"collect-profiles-29483325-rzms8\" (UID: \"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8"
Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.390371 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-secret-volume\") pod \"collect-profiles-29483325-rzms8\" (UID: \"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8"
Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.410319 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7jk6\" (UniqueName: \"kubernetes.io/projected/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-kube-api-access-h7jk6\") pod \"collect-profiles-29483325-rzms8\" (UID: \"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8"
Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.512160 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8"
Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.712723 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="171b155437f4c8383a0145071a128693d76b7a6e60a851ddb744837ea725325c" exitCode=0
Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.712776 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"171b155437f4c8383a0145071a128693d76b7a6e60a851ddb744837ea725325c"}
Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.713032 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"}
Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.713060 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3"
Jan 21 12:45:01 crc kubenswrapper[4881]: I0121 12:45:01.030745 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8"]
Jan 21 12:45:01 crc kubenswrapper[4881]: W0121 12:45:01.033820 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode92a1004_4ae7_4c9f_8ed8_1cb1a78dd2b7.slice/crio-488e304d3c88c7d810895dc4c77ecce8400601dc4b2a8957145c64a59aee59d1 WatchSource:0}: Error finding container 488e304d3c88c7d810895dc4c77ecce8400601dc4b2a8957145c64a59aee59d1: Status 404 returned error can't find the container with id 488e304d3c88c7d810895dc4c77ecce8400601dc4b2a8957145c64a59aee59d1
Jan 21 12:45:01 crc kubenswrapper[4881]: I0121 12:45:01.723943 4881 generic.go:334] "Generic (PLEG): container finished" podID="e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7" containerID="77513d54cf4d9f5496abf1ce9933fa0d7aa3da0530b4c165a7c1ed70ba94b89c" exitCode=0
Jan 21 12:45:01 crc kubenswrapper[4881]: I0121 12:45:01.724010 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8" event={"ID":"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7","Type":"ContainerDied","Data":"77513d54cf4d9f5496abf1ce9933fa0d7aa3da0530b4c165a7c1ed70ba94b89c"}
event={"ID":"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7","Type":"ContainerDied","Data":"77513d54cf4d9f5496abf1ce9933fa0d7aa3da0530b4c165a7c1ed70ba94b89c"} Jan 21 12:45:01 crc kubenswrapper[4881]: I0121 12:45:01.724223 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8" event={"ID":"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7","Type":"ContainerStarted","Data":"488e304d3c88c7d810895dc4c77ecce8400601dc4b2a8957145c64a59aee59d1"} Jan 21 12:45:03 crc kubenswrapper[4881]: I0121 12:45:03.142670 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8" Jan 21 12:45:03 crc kubenswrapper[4881]: I0121 12:45:03.250229 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-secret-volume\") pod \"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7\" (UID: \"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7\") " Jan 21 12:45:03 crc kubenswrapper[4881]: I0121 12:45:03.250283 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7jk6\" (UniqueName: \"kubernetes.io/projected/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-kube-api-access-h7jk6\") pod \"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7\" (UID: \"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7\") " Jan 21 12:45:03 crc kubenswrapper[4881]: I0121 12:45:03.250336 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-config-volume\") pod \"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7\" (UID: \"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7\") " Jan 21 12:45:03 crc kubenswrapper[4881]: I0121 12:45:03.251749 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-config-volume" (OuterVolumeSpecName: "config-volume") pod "e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7" (UID: "e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 12:45:03 crc kubenswrapper[4881]: I0121 12:45:03.262661 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7" (UID: "e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 12:45:03 crc kubenswrapper[4881]: I0121 12:45:03.264189 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-kube-api-access-h7jk6" (OuterVolumeSpecName: "kube-api-access-h7jk6") pod "e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7" (UID: "e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7"). InnerVolumeSpecName "kube-api-access-h7jk6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:45:03 crc kubenswrapper[4881]: I0121 12:45:03.352969 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 12:45:03 crc kubenswrapper[4881]: I0121 12:45:03.353009 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7jk6\" (UniqueName: \"kubernetes.io/projected/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-kube-api-access-h7jk6\") on node \"crc\" DevicePath \"\"" Jan 21 12:45:03 crc kubenswrapper[4881]: I0121 12:45:03.353025 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 12:45:03 crc kubenswrapper[4881]: I0121 12:45:03.754717 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8" event={"ID":"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7","Type":"ContainerDied","Data":"488e304d3c88c7d810895dc4c77ecce8400601dc4b2a8957145c64a59aee59d1"} Jan 21 12:45:03 crc kubenswrapper[4881]: I0121 12:45:03.754971 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="488e304d3c88c7d810895dc4c77ecce8400601dc4b2a8957145c64a59aee59d1" Jan 21 12:45:03 crc kubenswrapper[4881]: I0121 12:45:03.755229 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8" Jan 21 12:45:04 crc kubenswrapper[4881]: I0121 12:45:04.245084 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn"] Jan 21 12:45:04 crc kubenswrapper[4881]: I0121 12:45:04.261667 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn"] Jan 21 12:45:05 crc kubenswrapper[4881]: I0121 12:45:05.329576 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e74d3023-7ad9-4e65-9627-cc8127927f6b" path="/var/lib/kubelet/pods/e74d3023-7ad9-4e65-9627-cc8127927f6b/volumes" Jan 21 12:45:34 crc kubenswrapper[4881]: I0121 12:45:34.671304 4881 scope.go:117] "RemoveContainer" containerID="f4fa32143b4e9e742c21ea98ab2bdc72498265c13850a532b1a72e716a34316a" Jan 21 12:45:53 crc kubenswrapper[4881]: I0121 12:45:53.759827 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jr6dv"] Jan 21 12:45:53 crc kubenswrapper[4881]: E0121 12:45:53.761010 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7" containerName="collect-profiles" Jan 21 12:45:53 crc kubenswrapper[4881]: I0121 12:45:53.761030 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7" containerName="collect-profiles" Jan 21 12:45:53 crc kubenswrapper[4881]: I0121 12:45:53.761322 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7" containerName="collect-profiles" Jan 21 12:45:53 crc kubenswrapper[4881]: I0121 12:45:53.763267 4881 util.go:30] "No sandbox for pod can be found. 
Jan 21 12:45:53 crc kubenswrapper[4881]: I0121 12:45:53.791061 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-catalog-content\") pod \"redhat-operators-jr6dv\" (UID: \"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6\") " pod="openshift-marketplace/redhat-operators-jr6dv"
Jan 21 12:45:53 crc kubenswrapper[4881]: I0121 12:45:53.791209 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-utilities\") pod \"redhat-operators-jr6dv\" (UID: \"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6\") " pod="openshift-marketplace/redhat-operators-jr6dv"
Jan 21 12:45:53 crc kubenswrapper[4881]: I0121 12:45:53.791268 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxw6z\" (UniqueName: \"kubernetes.io/projected/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-kube-api-access-mxw6z\") pod \"redhat-operators-jr6dv\" (UID: \"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6\") " pod="openshift-marketplace/redhat-operators-jr6dv"
Jan 21 12:45:53 crc kubenswrapper[4881]: I0121 12:45:53.795627 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jr6dv"]
Jan 21 12:45:53 crc kubenswrapper[4881]: I0121 12:45:53.895191 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-catalog-content\") pod \"redhat-operators-jr6dv\" (UID: \"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6\") " pod="openshift-marketplace/redhat-operators-jr6dv"
Jan 21 12:45:53 crc kubenswrapper[4881]: I0121 12:45:53.895343 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-utilities\") pod \"redhat-operators-jr6dv\" (UID: \"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6\") " pod="openshift-marketplace/redhat-operators-jr6dv"
Jan 21 12:45:53 crc kubenswrapper[4881]: I0121 12:45:53.895398 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxw6z\" (UniqueName: \"kubernetes.io/projected/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-kube-api-access-mxw6z\") pod \"redhat-operators-jr6dv\" (UID: \"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6\") " pod="openshift-marketplace/redhat-operators-jr6dv"
Jan 21 12:45:53 crc kubenswrapper[4881]: I0121 12:45:53.896067 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-utilities\") pod \"redhat-operators-jr6dv\" (UID: \"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6\") " pod="openshift-marketplace/redhat-operators-jr6dv"
Jan 21 12:45:53 crc kubenswrapper[4881]: I0121 12:45:53.896449 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-catalog-content\") pod \"redhat-operators-jr6dv\" (UID: \"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6\") " pod="openshift-marketplace/redhat-operators-jr6dv"
Jan 21 12:45:53 crc kubenswrapper[4881]: I0121 12:45:53.931815 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxw6z\" (UniqueName: \"kubernetes.io/projected/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-kube-api-access-mxw6z\") pod \"redhat-operators-jr6dv\" (UID: \"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6\") " pod="openshift-marketplace/redhat-operators-jr6dv"
Jan 21 12:45:54 crc kubenswrapper[4881]: I0121 12:45:54.101164 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jr6dv"
Jan 21 12:45:54 crc kubenswrapper[4881]: I0121 12:45:54.624411 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jr6dv"]
Jan 21 12:45:55 crc kubenswrapper[4881]: I0121 12:45:55.365323 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr6dv" event={"ID":"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6","Type":"ContainerStarted","Data":"2774ac01c095d3eaca53dacf6b3eab5a5a87e1e1faa5a2c821e90ca5b599bf28"}
Jan 21 12:45:58 crc kubenswrapper[4881]: I0121 12:45:58.401881 4881 generic.go:334] "Generic (PLEG): container finished" podID="7828a13b-c9c5-4bf7-b3e5-fcf9835417a6" containerID="ff08bbee0e9fe86ebc38c20b8b828d04cc2bec5f3aceb31f9921a64da8bf75af" exitCode=0
Jan 21 12:45:58 crc kubenswrapper[4881]: I0121 12:45:58.401951 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr6dv" event={"ID":"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6","Type":"ContainerDied","Data":"ff08bbee0e9fe86ebc38c20b8b828d04cc2bec5f3aceb31f9921a64da8bf75af"}
Jan 21 12:45:58 crc kubenswrapper[4881]: I0121 12:45:58.407722 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 12:46:03 crc kubenswrapper[4881]: I0121 12:46:03.462071 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr6dv" event={"ID":"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6","Type":"ContainerStarted","Data":"fa19ae670e3e4e727e7a1290bfa09bdb19f3eed248af5fd0ee01f8baea3b1081"}
Jan 21 12:46:10 crc kubenswrapper[4881]: I0121 12:46:10.613708 4881 generic.go:334] "Generic (PLEG): container finished" podID="7828a13b-c9c5-4bf7-b3e5-fcf9835417a6" containerID="fa19ae670e3e4e727e7a1290bfa09bdb19f3eed248af5fd0ee01f8baea3b1081" exitCode=0
Jan 21 12:46:10 crc kubenswrapper[4881]: I0121 12:46:10.613892 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr6dv" event={"ID":"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6","Type":"ContainerDied","Data":"fa19ae670e3e4e727e7a1290bfa09bdb19f3eed248af5fd0ee01f8baea3b1081"}
Jan 21 12:46:15 crc kubenswrapper[4881]: I0121 12:46:15.665067 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr6dv" event={"ID":"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6","Type":"ContainerStarted","Data":"c7f5ad7d69a3d6952b116d16a27812356bde7f39581517bea0004391a6c274a4"}
Jan 21 12:46:15 crc kubenswrapper[4881]: I0121 12:46:15.694133 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jr6dv" podStartSLOduration=6.6348925560000005 podStartE2EDuration="22.69409097s" podCreationTimestamp="2026-01-21 12:45:53 +0000 UTC" firstStartedPulling="2026-01-21 12:45:58.407263948 +0000 UTC m=+6545.667220417" lastFinishedPulling="2026-01-21 12:46:14.466462342 +0000 UTC m=+6561.726418831" observedRunningTime="2026-01-21 12:46:15.687175272 +0000 UTC m=+6562.947131751" watchObservedRunningTime="2026-01-21 12:46:15.69409097 +0000 UTC m=+6562.954047439"
Jan 21 12:46:24 crc kubenswrapper[4881]: I0121 12:46:24.101834 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jr6dv"
Jan 21 12:46:24 crc kubenswrapper[4881]: I0121 12:46:24.102360 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jr6dv"
Jan 21 12:46:24 crc kubenswrapper[4881]: I0121 12:46:24.190804 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jr6dv"
Jan 21 12:46:24 crc kubenswrapper[4881]: I0121 12:46:24.812987 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jr6dv"
Jan 21 12:46:24 crc kubenswrapper[4881]: I0121 12:46:24.956736 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jr6dv"]
Jan 21 12:46:26 crc kubenswrapper[4881]: I0121 12:46:26.778100 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jr6dv" podUID="7828a13b-c9c5-4bf7-b3e5-fcf9835417a6" containerName="registry-server" containerID="cri-o://c7f5ad7d69a3d6952b116d16a27812356bde7f39581517bea0004391a6c274a4" gracePeriod=2
Jan 21 12:46:27 crc kubenswrapper[4881]: I0121 12:46:27.794762 4881 generic.go:334] "Generic (PLEG): container finished" podID="7828a13b-c9c5-4bf7-b3e5-fcf9835417a6" containerID="c7f5ad7d69a3d6952b116d16a27812356bde7f39581517bea0004391a6c274a4" exitCode=0
Jan 21 12:46:27 crc kubenswrapper[4881]: I0121 12:46:27.794813 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr6dv" event={"ID":"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6","Type":"ContainerDied","Data":"c7f5ad7d69a3d6952b116d16a27812356bde7f39581517bea0004391a6c274a4"}
Jan 21 12:46:27 crc kubenswrapper[4881]: I0121 12:46:27.920915 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jr6dv"
Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.056447 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-catalog-content\") pod \"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6\" (UID: \"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6\") "
Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.056774 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxw6z\" (UniqueName: \"kubernetes.io/projected/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-kube-api-access-mxw6z\") pod \"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6\" (UID: \"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6\") "
Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.057116 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-utilities\") pod \"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6\" (UID: \"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6\") "
Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.058145 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-utilities" (OuterVolumeSpecName: "utilities") pod "7828a13b-c9c5-4bf7-b3e5-fcf9835417a6" (UID: "7828a13b-c9c5-4bf7-b3e5-fcf9835417a6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.067265 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-kube-api-access-mxw6z" (OuterVolumeSpecName: "kube-api-access-mxw6z") pod "7828a13b-c9c5-4bf7-b3e5-fcf9835417a6" (UID: "7828a13b-c9c5-4bf7-b3e5-fcf9835417a6"). InnerVolumeSpecName "kube-api-access-mxw6z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.160394 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.160441 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxw6z\" (UniqueName: \"kubernetes.io/projected/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-kube-api-access-mxw6z\") on node \"crc\" DevicePath \"\""
Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.191583 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7828a13b-c9c5-4bf7-b3e5-fcf9835417a6" (UID: "7828a13b-c9c5-4bf7-b3e5-fcf9835417a6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.264828 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.810627 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr6dv" event={"ID":"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6","Type":"ContainerDied","Data":"2774ac01c095d3eaca53dacf6b3eab5a5a87e1e1faa5a2c821e90ca5b599bf28"}
Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.810701 4881 scope.go:117] "RemoveContainer" containerID="c7f5ad7d69a3d6952b116d16a27812356bde7f39581517bea0004391a6c274a4"
Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.811826 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jr6dv"
Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.855543 4881 scope.go:117] "RemoveContainer" containerID="fa19ae670e3e4e727e7a1290bfa09bdb19f3eed248af5fd0ee01f8baea3b1081"
Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.900367 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jr6dv"]
Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.917726 4881 scope.go:117] "RemoveContainer" containerID="ff08bbee0e9fe86ebc38c20b8b828d04cc2bec5f3aceb31f9921a64da8bf75af"
Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.920165 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jr6dv"]
Jan 21 12:46:29 crc kubenswrapper[4881]: I0121 12:46:29.334630 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7828a13b-c9c5-4bf7-b3e5-fcf9835417a6" path="/var/lib/kubelet/pods/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6/volumes"
Jan 21 12:47:29 crc kubenswrapper[4881]: I0121 12:47:29.851073 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 12:47:29 crc kubenswrapper[4881]: I0121 12:47:29.851892 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 12:47:59 crc kubenswrapper[4881]: I0121 12:47:59.851141 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 12:47:59 crc kubenswrapper[4881]: I0121 12:47:59.851666 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 12:48:29 crc kubenswrapper[4881]: I0121 12:48:29.851199 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 12:48:29 crc kubenswrapper[4881]: I0121 12:48:29.852082 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 12:48:29 crc kubenswrapper[4881]: I0121 12:48:29.852157 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr"
Jan 21 12:48:29 crc kubenswrapper[4881]: I0121 12:48:29.853093 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
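The liveness failures above are literal HTTP probes: kubelet GETs http://127.0.0.1:8798/health and a refused connection counts as one failed attempt. With the default failureThreshold of 3 and the 30 s cadence visible here (12:47:29, 12:47:59, 12:48:29), the third consecutive failure flips the probe to unhealthy and produces the restart decision in the preceding entry. The check itself reduces to (a sketch of the probe semantics, not kubelet's prober code):

import urllib.request, urllib.error

def http_probe(url="http://127.0.0.1:8798/health", timeout=1.0):
    # 2xx/3xx is success; refused connections and timeouts are failures,
    # which is exactly what the "Probe failed" entries record.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        return False

print(http_probe())  # False while nothing is listening on 8798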
Jan 21 12:48:29 crc kubenswrapper[4881]: I0121 12:48:29.853199 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379" gracePeriod=600
Jan 21 12:48:29 crc kubenswrapper[4881]: E0121 12:48:29.978360 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:48:30 crc kubenswrapper[4881]: I0121 12:48:30.585260 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379" exitCode=0
Jan 21 12:48:30 crc kubenswrapper[4881]: I0121 12:48:30.585329 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"}
Jan 21 12:48:30 crc kubenswrapper[4881]: I0121 12:48:30.585634 4881 scope.go:117] "RemoveContainer" containerID="171b155437f4c8383a0145071a128693d76b7a6e60a851ddb744837ea725325c"
Jan 21 12:48:30 crc kubenswrapper[4881]: I0121 12:48:30.586529 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"
Jan 21 12:48:30 crc kubenswrapper[4881]: E0121 12:48:30.586916 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:48:45 crc kubenswrapper[4881]: I0121 12:48:45.310865 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"
Jan 21 12:48:45 crc kubenswrapper[4881]: E0121 12:48:45.311896 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:48:59 crc kubenswrapper[4881]: I0121 12:48:59.312032 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"
Jan 21 12:48:59 crc kubenswrapper[4881]: E0121 12:48:59.313029 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:49:13 crc kubenswrapper[4881]: I0121 12:49:13.318966 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"
Jan 21 12:49:13 crc kubenswrapper[4881]: E0121 12:49:13.319959 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:49:26 crc kubenswrapper[4881]: I0121 12:49:26.310893 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"
Jan 21 12:49:26 crc kubenswrapper[4881]: E0121 12:49:26.311871 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:49:39 crc kubenswrapper[4881]: I0121 12:49:39.311613 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"
Jan 21 12:49:39 crc kubenswrapper[4881]: E0121 12:49:39.313251 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:49:52 crc kubenswrapper[4881]: I0121 12:49:52.311308 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"
Jan 21 12:49:52 crc kubenswrapper[4881]: E0121 12:49:52.312214 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:50:04 crc kubenswrapper[4881]: I0121 12:50:04.313443 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"
Jan 21 12:50:04 crc kubenswrapper[4881]: E0121 12:50:04.314819 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:50:17 crc kubenswrapper[4881]: I0121 12:50:17.311146 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"
Jan 21 12:50:17 crc kubenswrapper[4881]: E0121 12:50:17.312277 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:50:28 crc kubenswrapper[4881]: I0121 12:50:28.310557 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"
Jan 21 12:50:28 crc kubenswrapper[4881]: E0121 12:50:28.311260 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:50:41 crc kubenswrapper[4881]: I0121 12:50:41.312047 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"
Jan 21 12:50:41 crc kubenswrapper[4881]: E0121 12:50:41.313315 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:50:56 crc kubenswrapper[4881]: I0121 12:50:56.310873 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"
Jan 21 12:50:56 crc kubenswrapper[4881]: E0121 12:50:56.311560 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:51:10 crc kubenswrapper[4881]: I0121 12:51:10.567651 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"
Jan 21 12:51:10 crc kubenswrapper[4881]: E0121 12:51:10.568571 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:51:24 crc kubenswrapper[4881]: I0121 12:51:24.311736 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379" Jan 21 12:51:24 crc kubenswrapper[4881]: E0121 12:51:24.314294 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:51:38 crc kubenswrapper[4881]: I0121 12:51:38.311003 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379" Jan 21 12:51:38 crc kubenswrapper[4881]: E0121 12:51:38.311862 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:51:51 crc kubenswrapper[4881]: I0121 12:51:51.311138 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379" Jan 21 12:51:51 crc kubenswrapper[4881]: E0121 12:51:51.312338 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:52:04 crc kubenswrapper[4881]: I0121 12:52:04.314555 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379" Jan 21 12:52:04 crc kubenswrapper[4881]: E0121 12:52:04.329941 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:52:15 crc kubenswrapper[4881]: I0121 12:52:15.311016 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379" Jan 21 12:52:15 crc kubenswrapper[4881]: E0121 12:52:15.311700 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:52:29 crc kubenswrapper[4881]: I0121 12:52:29.319262 4881 
scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379" Jan 21 12:52:29 crc kubenswrapper[4881]: E0121 12:52:29.321321 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:52:42 crc kubenswrapper[4881]: I0121 12:52:42.311329 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379" Jan 21 12:52:42 crc kubenswrapper[4881]: E0121 12:52:42.312276 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:52:56 crc kubenswrapper[4881]: I0121 12:52:56.311373 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379" Jan 21 12:52:56 crc kubenswrapper[4881]: E0121 12:52:56.312324 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:53:07 crc kubenswrapper[4881]: I0121 12:53:07.310948 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379" Jan 21 12:53:07 crc kubenswrapper[4881]: E0121 12:53:07.312080 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:53:22 crc kubenswrapper[4881]: I0121 12:53:22.311263 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379" Jan 21 12:53:22 crc kubenswrapper[4881]: E0121 12:53:22.312086 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:53:34 crc kubenswrapper[4881]: I0121 12:53:34.311369 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379" Jan 21 12:53:35 crc kubenswrapper[4881]: I0121 12:53:35.152719 4881 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"dedb540716d32e2d9c1d7422b582f5eca19a8a8f41fc5f2cec024d263d91f035"} Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.741549 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mtgbb"] Jan 21 12:53:36 crc kubenswrapper[4881]: E0121 12:53:36.742510 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7828a13b-c9c5-4bf7-b3e5-fcf9835417a6" containerName="extract-utilities" Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.742525 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7828a13b-c9c5-4bf7-b3e5-fcf9835417a6" containerName="extract-utilities" Jan 21 12:53:36 crc kubenswrapper[4881]: E0121 12:53:36.742563 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7828a13b-c9c5-4bf7-b3e5-fcf9835417a6" containerName="registry-server" Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.742571 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7828a13b-c9c5-4bf7-b3e5-fcf9835417a6" containerName="registry-server" Jan 21 12:53:36 crc kubenswrapper[4881]: E0121 12:53:36.742584 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7828a13b-c9c5-4bf7-b3e5-fcf9835417a6" containerName="extract-content" Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.742590 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7828a13b-c9c5-4bf7-b3e5-fcf9835417a6" containerName="extract-content" Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.742816 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="7828a13b-c9c5-4bf7-b3e5-fcf9835417a6" containerName="registry-server" Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.744367 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mtgbb" Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.756652 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mtgbb"] Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.796230 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-utilities\") pod \"redhat-marketplace-mtgbb\" (UID: \"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d\") " pod="openshift-marketplace/redhat-marketplace-mtgbb" Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.796274 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdlnm\" (UniqueName: \"kubernetes.io/projected/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-kube-api-access-qdlnm\") pod \"redhat-marketplace-mtgbb\" (UID: \"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d\") " pod="openshift-marketplace/redhat-marketplace-mtgbb" Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.796363 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-catalog-content\") pod \"redhat-marketplace-mtgbb\" (UID: \"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d\") " pod="openshift-marketplace/redhat-marketplace-mtgbb" Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.899021 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-utilities\") pod \"redhat-marketplace-mtgbb\" (UID: \"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d\") " pod="openshift-marketplace/redhat-marketplace-mtgbb" Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.899094 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdlnm\" (UniqueName: \"kubernetes.io/projected/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-kube-api-access-qdlnm\") pod \"redhat-marketplace-mtgbb\" (UID: \"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d\") " pod="openshift-marketplace/redhat-marketplace-mtgbb" Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.899190 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-catalog-content\") pod \"redhat-marketplace-mtgbb\" (UID: \"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d\") " pod="openshift-marketplace/redhat-marketplace-mtgbb" Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.899822 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-utilities\") pod \"redhat-marketplace-mtgbb\" (UID: \"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d\") " pod="openshift-marketplace/redhat-marketplace-mtgbb" Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.899840 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-catalog-content\") pod \"redhat-marketplace-mtgbb\" (UID: \"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d\") " pod="openshift-marketplace/redhat-marketplace-mtgbb" Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.919719 4881 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qdlnm\" (UniqueName: \"kubernetes.io/projected/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-kube-api-access-qdlnm\") pod \"redhat-marketplace-mtgbb\" (UID: \"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d\") " pod="openshift-marketplace/redhat-marketplace-mtgbb" Jan 21 12:53:37 crc kubenswrapper[4881]: I0121 12:53:37.102447 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mtgbb" Jan 21 12:53:37 crc kubenswrapper[4881]: I0121 12:53:37.660972 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mtgbb"] Jan 21 12:53:38 crc kubenswrapper[4881]: I0121 12:53:38.200542 4881 generic.go:334] "Generic (PLEG): container finished" podID="d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d" containerID="f2776602f795a7d0db90491031dc97999d1b185014b08b5f9c8ef36a6686ca71" exitCode=0 Jan 21 12:53:38 crc kubenswrapper[4881]: I0121 12:53:38.200606 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtgbb" event={"ID":"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d","Type":"ContainerDied","Data":"f2776602f795a7d0db90491031dc97999d1b185014b08b5f9c8ef36a6686ca71"} Jan 21 12:53:38 crc kubenswrapper[4881]: I0121 12:53:38.200900 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtgbb" event={"ID":"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d","Type":"ContainerStarted","Data":"58441862b2c9de7ece6e1d2b0436d4ef9e5c2e523eb21c92187a1291d8b4e708"} Jan 21 12:53:38 crc kubenswrapper[4881]: I0121 12:53:38.203471 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 12:53:39 crc kubenswrapper[4881]: I0121 12:53:39.211775 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtgbb" event={"ID":"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d","Type":"ContainerStarted","Data":"46b8246f1ddbb9722b2104b7ec9ef1f064fac477d33848aa9a101199d9fce4e0"} Jan 21 12:53:40 crc kubenswrapper[4881]: I0121 12:53:40.225140 4881 generic.go:334] "Generic (PLEG): container finished" podID="d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d" containerID="46b8246f1ddbb9722b2104b7ec9ef1f064fac477d33848aa9a101199d9fce4e0" exitCode=0 Jan 21 12:53:40 crc kubenswrapper[4881]: I0121 12:53:40.225207 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtgbb" event={"ID":"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d","Type":"ContainerDied","Data":"46b8246f1ddbb9722b2104b7ec9ef1f064fac477d33848aa9a101199d9fce4e0"} Jan 21 12:53:41 crc kubenswrapper[4881]: I0121 12:53:41.237514 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtgbb" event={"ID":"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d","Type":"ContainerStarted","Data":"fe23b6146a83222aa15f4a6ed582038505ddc15dfc14a0c48277a51346feb485"} Jan 21 12:53:41 crc kubenswrapper[4881]: I0121 12:53:41.268490 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mtgbb" podStartSLOduration=2.760782728 podStartE2EDuration="5.268453768s" podCreationTimestamp="2026-01-21 12:53:36 +0000 UTC" firstStartedPulling="2026-01-21 12:53:38.203046886 +0000 UTC m=+7005.463003375" lastFinishedPulling="2026-01-21 12:53:40.710717956 +0000 UTC m=+7007.970674415" observedRunningTime="2026-01-21 12:53:41.259814229 +0000 UTC m=+7008.519770708" watchObservedRunningTime="2026-01-21 12:53:41.268453768 +0000 UTC 
m=+7008.528410237" Jan 21 12:53:47 crc kubenswrapper[4881]: I0121 12:53:47.103091 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mtgbb" Jan 21 12:53:47 crc kubenswrapper[4881]: I0121 12:53:47.103654 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mtgbb" Jan 21 12:53:47 crc kubenswrapper[4881]: I0121 12:53:47.178760 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mtgbb" Jan 21 12:53:47 crc kubenswrapper[4881]: I0121 12:53:47.352945 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mtgbb" Jan 21 12:53:47 crc kubenswrapper[4881]: I0121 12:53:47.418714 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mtgbb"] Jan 21 12:53:49 crc kubenswrapper[4881]: I0121 12:53:49.318480 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mtgbb" podUID="d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d" containerName="registry-server" containerID="cri-o://fe23b6146a83222aa15f4a6ed582038505ddc15dfc14a0c48277a51346feb485" gracePeriod=2 Jan 21 12:53:50 crc kubenswrapper[4881]: I0121 12:53:50.332829 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtgbb" event={"ID":"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d","Type":"ContainerDied","Data":"fe23b6146a83222aa15f4a6ed582038505ddc15dfc14a0c48277a51346feb485"} Jan 21 12:53:50 crc kubenswrapper[4881]: I0121 12:53:50.332890 4881 generic.go:334] "Generic (PLEG): container finished" podID="d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d" containerID="fe23b6146a83222aa15f4a6ed582038505ddc15dfc14a0c48277a51346feb485" exitCode=0 Jan 21 12:53:50 crc kubenswrapper[4881]: I0121 12:53:50.490111 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mtgbb" Jan 21 12:53:50 crc kubenswrapper[4881]: I0121 12:53:50.663701 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdlnm\" (UniqueName: \"kubernetes.io/projected/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-kube-api-access-qdlnm\") pod \"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d\" (UID: \"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d\") " Jan 21 12:53:50 crc kubenswrapper[4881]: I0121 12:53:50.663995 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-utilities\") pod \"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d\" (UID: \"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d\") " Jan 21 12:53:50 crc kubenswrapper[4881]: I0121 12:53:50.664073 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-catalog-content\") pod \"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d\" (UID: \"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d\") " Jan 21 12:53:50 crc kubenswrapper[4881]: I0121 12:53:50.664884 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-utilities" (OuterVolumeSpecName: "utilities") pod "d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d" (UID: "d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:53:50 crc kubenswrapper[4881]: I0121 12:53:50.671133 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-kube-api-access-qdlnm" (OuterVolumeSpecName: "kube-api-access-qdlnm") pod "d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d" (UID: "d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d"). InnerVolumeSpecName "kube-api-access-qdlnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:53:50 crc kubenswrapper[4881]: I0121 12:53:50.698483 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d" (UID: "d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:53:50 crc kubenswrapper[4881]: I0121 12:53:50.766770 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:53:50 crc kubenswrapper[4881]: I0121 12:53:50.766844 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdlnm\" (UniqueName: \"kubernetes.io/projected/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-kube-api-access-qdlnm\") on node \"crc\" DevicePath \"\"" Jan 21 12:53:50 crc kubenswrapper[4881]: I0121 12:53:50.766856 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:53:51 crc kubenswrapper[4881]: I0121 12:53:51.347363 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtgbb" event={"ID":"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d","Type":"ContainerDied","Data":"58441862b2c9de7ece6e1d2b0436d4ef9e5c2e523eb21c92187a1291d8b4e708"} Jan 21 12:53:51 crc kubenswrapper[4881]: I0121 12:53:51.347433 4881 scope.go:117] "RemoveContainer" containerID="fe23b6146a83222aa15f4a6ed582038505ddc15dfc14a0c48277a51346feb485" Jan 21 12:53:51 crc kubenswrapper[4881]: I0121 12:53:51.347473 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mtgbb" Jan 21 12:53:51 crc kubenswrapper[4881]: I0121 12:53:51.378623 4881 scope.go:117] "RemoveContainer" containerID="46b8246f1ddbb9722b2104b7ec9ef1f064fac477d33848aa9a101199d9fce4e0" Jan 21 12:53:51 crc kubenswrapper[4881]: I0121 12:53:51.379353 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mtgbb"] Jan 21 12:53:51 crc kubenswrapper[4881]: I0121 12:53:51.394623 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mtgbb"] Jan 21 12:53:51 crc kubenswrapper[4881]: I0121 12:53:51.400512 4881 scope.go:117] "RemoveContainer" containerID="f2776602f795a7d0db90491031dc97999d1b185014b08b5f9c8ef36a6686ca71" Jan 21 12:53:51 crc kubenswrapper[4881]: E0121 12:53:51.561063 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd01b3ca1_cd4e_42fa_ab27_811b3d2ab26d.slice/crio-58441862b2c9de7ece6e1d2b0436d4ef9e5c2e523eb21c92187a1291d8b4e708\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd01b3ca1_cd4e_42fa_ab27_811b3d2ab26d.slice\": RecentStats: unable to find data in memory cache]" Jan 21 12:53:53 crc kubenswrapper[4881]: I0121 12:53:53.324008 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d" path="/var/lib/kubelet/pods/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d/volumes" Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.458846 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4k5sb"] Jan 21 12:55:22 crc kubenswrapper[4881]: E0121 12:55:22.460093 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d" containerName="registry-server" Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.460110 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d" containerName="registry-server" Jan 21 12:55:22 crc kubenswrapper[4881]: E0121 12:55:22.460139 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d" containerName="extract-utilities" Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.460148 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d" containerName="extract-utilities" Jan 21 12:55:22 crc kubenswrapper[4881]: E0121 12:55:22.460171 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d" containerName="extract-content" Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.460179 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d" containerName="extract-content" Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.460493 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d" containerName="registry-server" Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.462645 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4k5sb" Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.472185 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4k5sb"] Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.519476 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/484fa13a-3d87-4fdb-926a-4bedccfa3140-utilities\") pod \"certified-operators-4k5sb\" (UID: \"484fa13a-3d87-4fdb-926a-4bedccfa3140\") " pod="openshift-marketplace/certified-operators-4k5sb" Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.519820 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhcmm\" (UniqueName: \"kubernetes.io/projected/484fa13a-3d87-4fdb-926a-4bedccfa3140-kube-api-access-nhcmm\") pod \"certified-operators-4k5sb\" (UID: \"484fa13a-3d87-4fdb-926a-4bedccfa3140\") " pod="openshift-marketplace/certified-operators-4k5sb" Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.520030 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/484fa13a-3d87-4fdb-926a-4bedccfa3140-catalog-content\") pod \"certified-operators-4k5sb\" (UID: \"484fa13a-3d87-4fdb-926a-4bedccfa3140\") " pod="openshift-marketplace/certified-operators-4k5sb" Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.622522 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/484fa13a-3d87-4fdb-926a-4bedccfa3140-catalog-content\") pod \"certified-operators-4k5sb\" (UID: \"484fa13a-3d87-4fdb-926a-4bedccfa3140\") " pod="openshift-marketplace/certified-operators-4k5sb" Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.622651 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/484fa13a-3d87-4fdb-926a-4bedccfa3140-utilities\") pod \"certified-operators-4k5sb\" (UID: \"484fa13a-3d87-4fdb-926a-4bedccfa3140\") " pod="openshift-marketplace/certified-operators-4k5sb" Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.622680 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhcmm\" (UniqueName: \"kubernetes.io/projected/484fa13a-3d87-4fdb-926a-4bedccfa3140-kube-api-access-nhcmm\") pod \"certified-operators-4k5sb\" (UID: \"484fa13a-3d87-4fdb-926a-4bedccfa3140\") " pod="openshift-marketplace/certified-operators-4k5sb" Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.623650 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/484fa13a-3d87-4fdb-926a-4bedccfa3140-catalog-content\") pod \"certified-operators-4k5sb\" (UID: \"484fa13a-3d87-4fdb-926a-4bedccfa3140\") " pod="openshift-marketplace/certified-operators-4k5sb" Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.623886 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/484fa13a-3d87-4fdb-926a-4bedccfa3140-utilities\") pod \"certified-operators-4k5sb\" (UID: \"484fa13a-3d87-4fdb-926a-4bedccfa3140\") " pod="openshift-marketplace/certified-operators-4k5sb" Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.646778 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nhcmm\" (UniqueName: \"kubernetes.io/projected/484fa13a-3d87-4fdb-926a-4bedccfa3140-kube-api-access-nhcmm\") pod \"certified-operators-4k5sb\" (UID: \"484fa13a-3d87-4fdb-926a-4bedccfa3140\") " pod="openshift-marketplace/certified-operators-4k5sb" Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.788668 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4k5sb" Jan 21 12:55:23 crc kubenswrapper[4881]: I0121 12:55:23.429299 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4k5sb"] Jan 21 12:55:23 crc kubenswrapper[4881]: I0121 12:55:23.525990 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k5sb" event={"ID":"484fa13a-3d87-4fdb-926a-4bedccfa3140","Type":"ContainerStarted","Data":"11ebae3a888dec65882526f80a7bb92025588e02e26de48fd6e89454edb9d249"} Jan 21 12:55:24 crc kubenswrapper[4881]: I0121 12:55:24.538131 4881 generic.go:334] "Generic (PLEG): container finished" podID="484fa13a-3d87-4fdb-926a-4bedccfa3140" containerID="466753d23890f667d788051ffffcde33823b92cd70a8273eb27f17cf0f2b8907" exitCode=0 Jan 21 12:55:24 crc kubenswrapper[4881]: I0121 12:55:24.538260 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k5sb" event={"ID":"484fa13a-3d87-4fdb-926a-4bedccfa3140","Type":"ContainerDied","Data":"466753d23890f667d788051ffffcde33823b92cd70a8273eb27f17cf0f2b8907"} Jan 21 12:55:24 crc kubenswrapper[4881]: I0121 12:55:24.644522 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qlpzh"] Jan 21 12:55:24 crc kubenswrapper[4881]: I0121 12:55:24.648351 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qlpzh" Jan 21 12:55:24 crc kubenswrapper[4881]: I0121 12:55:24.673337 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qlpzh"] Jan 21 12:55:24 crc kubenswrapper[4881]: I0121 12:55:24.690512 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbd14e97-6383-426c-a806-89dc0439e483-catalog-content\") pod \"community-operators-qlpzh\" (UID: \"bbd14e97-6383-426c-a806-89dc0439e483\") " pod="openshift-marketplace/community-operators-qlpzh" Jan 21 12:55:24 crc kubenswrapper[4881]: I0121 12:55:24.690573 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbd14e97-6383-426c-a806-89dc0439e483-utilities\") pod \"community-operators-qlpzh\" (UID: \"bbd14e97-6383-426c-a806-89dc0439e483\") " pod="openshift-marketplace/community-operators-qlpzh" Jan 21 12:55:24 crc kubenswrapper[4881]: I0121 12:55:24.690608 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj2xn\" (UniqueName: \"kubernetes.io/projected/bbd14e97-6383-426c-a806-89dc0439e483-kube-api-access-wj2xn\") pod \"community-operators-qlpzh\" (UID: \"bbd14e97-6383-426c-a806-89dc0439e483\") " pod="openshift-marketplace/community-operators-qlpzh" Jan 21 12:55:24 crc kubenswrapper[4881]: I0121 12:55:24.792825 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbd14e97-6383-426c-a806-89dc0439e483-catalog-content\") pod \"community-operators-qlpzh\" (UID: \"bbd14e97-6383-426c-a806-89dc0439e483\") " pod="openshift-marketplace/community-operators-qlpzh" Jan 21 12:55:24 crc kubenswrapper[4881]: I0121 12:55:24.792885 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbd14e97-6383-426c-a806-89dc0439e483-utilities\") pod \"community-operators-qlpzh\" (UID: \"bbd14e97-6383-426c-a806-89dc0439e483\") " pod="openshift-marketplace/community-operators-qlpzh" Jan 21 12:55:24 crc kubenswrapper[4881]: I0121 12:55:24.792929 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj2xn\" (UniqueName: \"kubernetes.io/projected/bbd14e97-6383-426c-a806-89dc0439e483-kube-api-access-wj2xn\") pod \"community-operators-qlpzh\" (UID: \"bbd14e97-6383-426c-a806-89dc0439e483\") " pod="openshift-marketplace/community-operators-qlpzh" Jan 21 12:55:24 crc kubenswrapper[4881]: I0121 12:55:24.793544 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbd14e97-6383-426c-a806-89dc0439e483-catalog-content\") pod \"community-operators-qlpzh\" (UID: \"bbd14e97-6383-426c-a806-89dc0439e483\") " pod="openshift-marketplace/community-operators-qlpzh" Jan 21 12:55:24 crc kubenswrapper[4881]: I0121 12:55:24.793765 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbd14e97-6383-426c-a806-89dc0439e483-utilities\") pod \"community-operators-qlpzh\" (UID: \"bbd14e97-6383-426c-a806-89dc0439e483\") " pod="openshift-marketplace/community-operators-qlpzh" Jan 21 12:55:24 crc kubenswrapper[4881]: I0121 12:55:24.815708 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wj2xn\" (UniqueName: \"kubernetes.io/projected/bbd14e97-6383-426c-a806-89dc0439e483-kube-api-access-wj2xn\") pod \"community-operators-qlpzh\" (UID: \"bbd14e97-6383-426c-a806-89dc0439e483\") " pod="openshift-marketplace/community-operators-qlpzh" Jan 21 12:55:24 crc kubenswrapper[4881]: I0121 12:55:24.979085 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qlpzh" Jan 21 12:55:25 crc kubenswrapper[4881]: I0121 12:55:25.497415 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qlpzh"] Jan 21 12:55:25 crc kubenswrapper[4881]: I0121 12:55:25.556280 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k5sb" event={"ID":"484fa13a-3d87-4fdb-926a-4bedccfa3140","Type":"ContainerStarted","Data":"d45e348ef263b7ba1f529124466d1dc280d898d072af30b46dae32df93da25c3"} Jan 21 12:55:25 crc kubenswrapper[4881]: I0121 12:55:25.558036 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlpzh" event={"ID":"bbd14e97-6383-426c-a806-89dc0439e483","Type":"ContainerStarted","Data":"d2d30cea3f4802992aeeddc90e712708eb9ee514be369fa07ba0e9851856d338"} Jan 21 12:55:26 crc kubenswrapper[4881]: I0121 12:55:26.580951 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlpzh" event={"ID":"bbd14e97-6383-426c-a806-89dc0439e483","Type":"ContainerStarted","Data":"44aa3e3470b0ca5ec311ca3411b98f9efca17f1a79eb00b5ddc10873f556ea0f"} Jan 21 12:55:27 crc kubenswrapper[4881]: I0121 12:55:27.599167 4881 generic.go:334] "Generic (PLEG): container finished" podID="bbd14e97-6383-426c-a806-89dc0439e483" containerID="44aa3e3470b0ca5ec311ca3411b98f9efca17f1a79eb00b5ddc10873f556ea0f" exitCode=0 Jan 21 12:55:27 crc kubenswrapper[4881]: I0121 12:55:27.599239 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlpzh" event={"ID":"bbd14e97-6383-426c-a806-89dc0439e483","Type":"ContainerDied","Data":"44aa3e3470b0ca5ec311ca3411b98f9efca17f1a79eb00b5ddc10873f556ea0f"} Jan 21 12:55:27 crc kubenswrapper[4881]: I0121 12:55:27.605699 4881 generic.go:334] "Generic (PLEG): container finished" podID="484fa13a-3d87-4fdb-926a-4bedccfa3140" containerID="d45e348ef263b7ba1f529124466d1dc280d898d072af30b46dae32df93da25c3" exitCode=0 Jan 21 12:55:27 crc kubenswrapper[4881]: I0121 12:55:27.605874 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k5sb" event={"ID":"484fa13a-3d87-4fdb-926a-4bedccfa3140","Type":"ContainerDied","Data":"d45e348ef263b7ba1f529124466d1dc280d898d072af30b46dae32df93da25c3"} Jan 21 12:55:28 crc kubenswrapper[4881]: I0121 12:55:28.618224 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k5sb" event={"ID":"484fa13a-3d87-4fdb-926a-4bedccfa3140","Type":"ContainerStarted","Data":"08e670a5efbccbc4f21fbb4986e9798817e396e84f38095931c66cf8c87af21d"} Jan 21 12:55:28 crc kubenswrapper[4881]: I0121 12:55:28.642944 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4k5sb" podStartSLOduration=3.151789721 podStartE2EDuration="6.642923115s" podCreationTimestamp="2026-01-21 12:55:22 +0000 UTC" firstStartedPulling="2026-01-21 12:55:24.540298077 +0000 UTC m=+7111.800254546" lastFinishedPulling="2026-01-21 
12:55:28.031431471 +0000 UTC m=+7115.291387940" observedRunningTime="2026-01-21 12:55:28.639733528 +0000 UTC m=+7115.899690007" watchObservedRunningTime="2026-01-21 12:55:28.642923115 +0000 UTC m=+7115.902879594" Jan 21 12:55:29 crc kubenswrapper[4881]: I0121 12:55:29.634507 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlpzh" event={"ID":"bbd14e97-6383-426c-a806-89dc0439e483","Type":"ContainerStarted","Data":"abcf99b1686ed30d644ac8cc3108a674f3f5c33b08bd1de6310637f6c896a97b"} Jan 21 12:55:31 crc kubenswrapper[4881]: I0121 12:55:31.666358 4881 generic.go:334] "Generic (PLEG): container finished" podID="bbd14e97-6383-426c-a806-89dc0439e483" containerID="abcf99b1686ed30d644ac8cc3108a674f3f5c33b08bd1de6310637f6c896a97b" exitCode=0 Jan 21 12:55:31 crc kubenswrapper[4881]: I0121 12:55:31.666447 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlpzh" event={"ID":"bbd14e97-6383-426c-a806-89dc0439e483","Type":"ContainerDied","Data":"abcf99b1686ed30d644ac8cc3108a674f3f5c33b08bd1de6310637f6c896a97b"} Jan 21 12:55:32 crc kubenswrapper[4881]: I0121 12:55:32.789485 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4k5sb" Jan 21 12:55:32 crc kubenswrapper[4881]: I0121 12:55:32.789813 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4k5sb" Jan 21 12:55:33 crc kubenswrapper[4881]: I0121 12:55:33.697471 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlpzh" event={"ID":"bbd14e97-6383-426c-a806-89dc0439e483","Type":"ContainerStarted","Data":"4c2249baf75909213b45ca0d5d8deab257ea7674b612d4fa673ea256f1644b3b"} Jan 21 12:55:33 crc kubenswrapper[4881]: I0121 12:55:33.723201 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qlpzh" podStartSLOduration=4.067145268 podStartE2EDuration="9.723180422s" podCreationTimestamp="2026-01-21 12:55:24 +0000 UTC" firstStartedPulling="2026-01-21 12:55:27.603199748 +0000 UTC m=+7114.863156257" lastFinishedPulling="2026-01-21 12:55:33.259234932 +0000 UTC m=+7120.519191411" observedRunningTime="2026-01-21 12:55:33.722956356 +0000 UTC m=+7120.982912885" watchObservedRunningTime="2026-01-21 12:55:33.723180422 +0000 UTC m=+7120.983136891" Jan 21 12:55:33 crc kubenswrapper[4881]: I0121 12:55:33.851827 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4k5sb" podUID="484fa13a-3d87-4fdb-926a-4bedccfa3140" containerName="registry-server" probeResult="failure" output=< Jan 21 12:55:33 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 12:55:33 crc kubenswrapper[4881]: > Jan 21 12:55:34 crc kubenswrapper[4881]: I0121 12:55:34.980640 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qlpzh" Jan 21 12:55:34 crc kubenswrapper[4881]: I0121 12:55:34.981602 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qlpzh" Jan 21 12:55:36 crc kubenswrapper[4881]: I0121 12:55:36.033085 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-qlpzh" podUID="bbd14e97-6383-426c-a806-89dc0439e483" containerName="registry-server" probeResult="failure" output=< Jan 21 12:55:36 crc 
kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 12:55:36 crc kubenswrapper[4881]: > Jan 21 12:55:42 crc kubenswrapper[4881]: I0121 12:55:42.865255 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4k5sb" Jan 21 12:55:42 crc kubenswrapper[4881]: I0121 12:55:42.949362 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4k5sb" Jan 21 12:55:43 crc kubenswrapper[4881]: I0121 12:55:43.114597 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4k5sb"] Jan 21 12:55:44 crc kubenswrapper[4881]: I0121 12:55:44.831849 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4k5sb" podUID="484fa13a-3d87-4fdb-926a-4bedccfa3140" containerName="registry-server" containerID="cri-o://08e670a5efbccbc4f21fbb4986e9798817e396e84f38095931c66cf8c87af21d" gracePeriod=2 Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.034996 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qlpzh" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.102559 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qlpzh" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.344819 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4k5sb" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.483014 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/484fa13a-3d87-4fdb-926a-4bedccfa3140-utilities\") pod \"484fa13a-3d87-4fdb-926a-4bedccfa3140\" (UID: \"484fa13a-3d87-4fdb-926a-4bedccfa3140\") " Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.483400 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhcmm\" (UniqueName: \"kubernetes.io/projected/484fa13a-3d87-4fdb-926a-4bedccfa3140-kube-api-access-nhcmm\") pod \"484fa13a-3d87-4fdb-926a-4bedccfa3140\" (UID: \"484fa13a-3d87-4fdb-926a-4bedccfa3140\") " Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.483589 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/484fa13a-3d87-4fdb-926a-4bedccfa3140-catalog-content\") pod \"484fa13a-3d87-4fdb-926a-4bedccfa3140\" (UID: \"484fa13a-3d87-4fdb-926a-4bedccfa3140\") " Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.487030 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/484fa13a-3d87-4fdb-926a-4bedccfa3140-utilities" (OuterVolumeSpecName: "utilities") pod "484fa13a-3d87-4fdb-926a-4bedccfa3140" (UID: "484fa13a-3d87-4fdb-926a-4bedccfa3140"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.495103 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/484fa13a-3d87-4fdb-926a-4bedccfa3140-kube-api-access-nhcmm" (OuterVolumeSpecName: "kube-api-access-nhcmm") pod "484fa13a-3d87-4fdb-926a-4bedccfa3140" (UID: "484fa13a-3d87-4fdb-926a-4bedccfa3140"). InnerVolumeSpecName "kube-api-access-nhcmm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.518218 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qlpzh"] Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.557924 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/484fa13a-3d87-4fdb-926a-4bedccfa3140-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "484fa13a-3d87-4fdb-926a-4bedccfa3140" (UID: "484fa13a-3d87-4fdb-926a-4bedccfa3140"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.588091 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhcmm\" (UniqueName: \"kubernetes.io/projected/484fa13a-3d87-4fdb-926a-4bedccfa3140-kube-api-access-nhcmm\") on node \"crc\" DevicePath \"\"" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.588138 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/484fa13a-3d87-4fdb-926a-4bedccfa3140-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.588152 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/484fa13a-3d87-4fdb-926a-4bedccfa3140-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.845120 4881 generic.go:334] "Generic (PLEG): container finished" podID="484fa13a-3d87-4fdb-926a-4bedccfa3140" containerID="08e670a5efbccbc4f21fbb4986e9798817e396e84f38095931c66cf8c87af21d" exitCode=0 Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.845184 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4k5sb" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.845295 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k5sb" event={"ID":"484fa13a-3d87-4fdb-926a-4bedccfa3140","Type":"ContainerDied","Data":"08e670a5efbccbc4f21fbb4986e9798817e396e84f38095931c66cf8c87af21d"} Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.845340 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k5sb" event={"ID":"484fa13a-3d87-4fdb-926a-4bedccfa3140","Type":"ContainerDied","Data":"11ebae3a888dec65882526f80a7bb92025588e02e26de48fd6e89454edb9d249"} Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.845364 4881 scope.go:117] "RemoveContainer" containerID="08e670a5efbccbc4f21fbb4986e9798817e396e84f38095931c66cf8c87af21d" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.870955 4881 scope.go:117] "RemoveContainer" containerID="d45e348ef263b7ba1f529124466d1dc280d898d072af30b46dae32df93da25c3" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.894953 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4k5sb"] Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.896634 4881 scope.go:117] "RemoveContainer" containerID="466753d23890f667d788051ffffcde33823b92cd70a8273eb27f17cf0f2b8907" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.917926 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4k5sb"] Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.940692 4881 scope.go:117] "RemoveContainer" containerID="08e670a5efbccbc4f21fbb4986e9798817e396e84f38095931c66cf8c87af21d" Jan 21 12:55:45 crc kubenswrapper[4881]: E0121 12:55:45.941288 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08e670a5efbccbc4f21fbb4986e9798817e396e84f38095931c66cf8c87af21d\": container with ID starting with 08e670a5efbccbc4f21fbb4986e9798817e396e84f38095931c66cf8c87af21d not found: ID does not exist" containerID="08e670a5efbccbc4f21fbb4986e9798817e396e84f38095931c66cf8c87af21d" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.941344 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08e670a5efbccbc4f21fbb4986e9798817e396e84f38095931c66cf8c87af21d"} err="failed to get container status \"08e670a5efbccbc4f21fbb4986e9798817e396e84f38095931c66cf8c87af21d\": rpc error: code = NotFound desc = could not find container \"08e670a5efbccbc4f21fbb4986e9798817e396e84f38095931c66cf8c87af21d\": container with ID starting with 08e670a5efbccbc4f21fbb4986e9798817e396e84f38095931c66cf8c87af21d not found: ID does not exist" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.941371 4881 scope.go:117] "RemoveContainer" containerID="d45e348ef263b7ba1f529124466d1dc280d898d072af30b46dae32df93da25c3" Jan 21 12:55:45 crc kubenswrapper[4881]: E0121 12:55:45.941618 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d45e348ef263b7ba1f529124466d1dc280d898d072af30b46dae32df93da25c3\": container with ID starting with d45e348ef263b7ba1f529124466d1dc280d898d072af30b46dae32df93da25c3 not found: ID does not exist" containerID="d45e348ef263b7ba1f529124466d1dc280d898d072af30b46dae32df93da25c3" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.941658 4881 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d45e348ef263b7ba1f529124466d1dc280d898d072af30b46dae32df93da25c3"} err="failed to get container status \"d45e348ef263b7ba1f529124466d1dc280d898d072af30b46dae32df93da25c3\": rpc error: code = NotFound desc = could not find container \"d45e348ef263b7ba1f529124466d1dc280d898d072af30b46dae32df93da25c3\": container with ID starting with d45e348ef263b7ba1f529124466d1dc280d898d072af30b46dae32df93da25c3 not found: ID does not exist" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.941684 4881 scope.go:117] "RemoveContainer" containerID="466753d23890f667d788051ffffcde33823b92cd70a8273eb27f17cf0f2b8907" Jan 21 12:55:45 crc kubenswrapper[4881]: E0121 12:55:45.942207 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"466753d23890f667d788051ffffcde33823b92cd70a8273eb27f17cf0f2b8907\": container with ID starting with 466753d23890f667d788051ffffcde33823b92cd70a8273eb27f17cf0f2b8907 not found: ID does not exist" containerID="466753d23890f667d788051ffffcde33823b92cd70a8273eb27f17cf0f2b8907" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.942258 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"466753d23890f667d788051ffffcde33823b92cd70a8273eb27f17cf0f2b8907"} err="failed to get container status \"466753d23890f667d788051ffffcde33823b92cd70a8273eb27f17cf0f2b8907\": rpc error: code = NotFound desc = could not find container \"466753d23890f667d788051ffffcde33823b92cd70a8273eb27f17cf0f2b8907\": container with ID starting with 466753d23890f667d788051ffffcde33823b92cd70a8273eb27f17cf0f2b8907 not found: ID does not exist" Jan 21 12:55:46 crc kubenswrapper[4881]: I0121 12:55:46.859656 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qlpzh" podUID="bbd14e97-6383-426c-a806-89dc0439e483" containerName="registry-server" containerID="cri-o://4c2249baf75909213b45ca0d5d8deab257ea7674b612d4fa673ea256f1644b3b" gracePeriod=2 Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.331929 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="484fa13a-3d87-4fdb-926a-4bedccfa3140" path="/var/lib/kubelet/pods/484fa13a-3d87-4fdb-926a-4bedccfa3140/volumes" Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.348881 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qlpzh" Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.360199 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbd14e97-6383-426c-a806-89dc0439e483-utilities\") pod \"bbd14e97-6383-426c-a806-89dc0439e483\" (UID: \"bbd14e97-6383-426c-a806-89dc0439e483\") " Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.360536 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbd14e97-6383-426c-a806-89dc0439e483-catalog-content\") pod \"bbd14e97-6383-426c-a806-89dc0439e483\" (UID: \"bbd14e97-6383-426c-a806-89dc0439e483\") " Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.360668 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj2xn\" (UniqueName: \"kubernetes.io/projected/bbd14e97-6383-426c-a806-89dc0439e483-kube-api-access-wj2xn\") pod \"bbd14e97-6383-426c-a806-89dc0439e483\" (UID: \"bbd14e97-6383-426c-a806-89dc0439e483\") " Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.362672 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbd14e97-6383-426c-a806-89dc0439e483-utilities" (OuterVolumeSpecName: "utilities") pod "bbd14e97-6383-426c-a806-89dc0439e483" (UID: "bbd14e97-6383-426c-a806-89dc0439e483"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.373080 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbd14e97-6383-426c-a806-89dc0439e483-kube-api-access-wj2xn" (OuterVolumeSpecName: "kube-api-access-wj2xn") pod "bbd14e97-6383-426c-a806-89dc0439e483" (UID: "bbd14e97-6383-426c-a806-89dc0439e483"). InnerVolumeSpecName "kube-api-access-wj2xn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.434928 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbd14e97-6383-426c-a806-89dc0439e483-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bbd14e97-6383-426c-a806-89dc0439e483" (UID: "bbd14e97-6383-426c-a806-89dc0439e483"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.463857 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbd14e97-6383-426c-a806-89dc0439e483-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.464213 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj2xn\" (UniqueName: \"kubernetes.io/projected/bbd14e97-6383-426c-a806-89dc0439e483-kube-api-access-wj2xn\") on node \"crc\" DevicePath \"\"" Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.464289 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbd14e97-6383-426c-a806-89dc0439e483-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.880447 4881 generic.go:334] "Generic (PLEG): container finished" podID="bbd14e97-6383-426c-a806-89dc0439e483" containerID="4c2249baf75909213b45ca0d5d8deab257ea7674b612d4fa673ea256f1644b3b" exitCode=0 Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.880487 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlpzh" event={"ID":"bbd14e97-6383-426c-a806-89dc0439e483","Type":"ContainerDied","Data":"4c2249baf75909213b45ca0d5d8deab257ea7674b612d4fa673ea256f1644b3b"} Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.880535 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlpzh" event={"ID":"bbd14e97-6383-426c-a806-89dc0439e483","Type":"ContainerDied","Data":"d2d30cea3f4802992aeeddc90e712708eb9ee514be369fa07ba0e9851856d338"} Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.880554 4881 scope.go:117] "RemoveContainer" containerID="4c2249baf75909213b45ca0d5d8deab257ea7674b612d4fa673ea256f1644b3b" Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.880620 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qlpzh" Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.908140 4881 scope.go:117] "RemoveContainer" containerID="abcf99b1686ed30d644ac8cc3108a674f3f5c33b08bd1de6310637f6c896a97b" Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.940213 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qlpzh"] Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.948272 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qlpzh"] Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.963433 4881 scope.go:117] "RemoveContainer" containerID="44aa3e3470b0ca5ec311ca3411b98f9efca17f1a79eb00b5ddc10873f556ea0f" Jan 21 12:55:48 crc kubenswrapper[4881]: I0121 12:55:48.008092 4881 scope.go:117] "RemoveContainer" containerID="4c2249baf75909213b45ca0d5d8deab257ea7674b612d4fa673ea256f1644b3b" Jan 21 12:55:48 crc kubenswrapper[4881]: E0121 12:55:48.008941 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c2249baf75909213b45ca0d5d8deab257ea7674b612d4fa673ea256f1644b3b\": container with ID starting with 4c2249baf75909213b45ca0d5d8deab257ea7674b612d4fa673ea256f1644b3b not found: ID does not exist" containerID="4c2249baf75909213b45ca0d5d8deab257ea7674b612d4fa673ea256f1644b3b" Jan 21 12:55:48 crc kubenswrapper[4881]: I0121 12:55:48.009009 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c2249baf75909213b45ca0d5d8deab257ea7674b612d4fa673ea256f1644b3b"} err="failed to get container status \"4c2249baf75909213b45ca0d5d8deab257ea7674b612d4fa673ea256f1644b3b\": rpc error: code = NotFound desc = could not find container \"4c2249baf75909213b45ca0d5d8deab257ea7674b612d4fa673ea256f1644b3b\": container with ID starting with 4c2249baf75909213b45ca0d5d8deab257ea7674b612d4fa673ea256f1644b3b not found: ID does not exist" Jan 21 12:55:48 crc kubenswrapper[4881]: I0121 12:55:48.009054 4881 scope.go:117] "RemoveContainer" containerID="abcf99b1686ed30d644ac8cc3108a674f3f5c33b08bd1de6310637f6c896a97b" Jan 21 12:55:48 crc kubenswrapper[4881]: E0121 12:55:48.009461 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abcf99b1686ed30d644ac8cc3108a674f3f5c33b08bd1de6310637f6c896a97b\": container with ID starting with abcf99b1686ed30d644ac8cc3108a674f3f5c33b08bd1de6310637f6c896a97b not found: ID does not exist" containerID="abcf99b1686ed30d644ac8cc3108a674f3f5c33b08bd1de6310637f6c896a97b" Jan 21 12:55:48 crc kubenswrapper[4881]: I0121 12:55:48.009496 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abcf99b1686ed30d644ac8cc3108a674f3f5c33b08bd1de6310637f6c896a97b"} err="failed to get container status \"abcf99b1686ed30d644ac8cc3108a674f3f5c33b08bd1de6310637f6c896a97b\": rpc error: code = NotFound desc = could not find container \"abcf99b1686ed30d644ac8cc3108a674f3f5c33b08bd1de6310637f6c896a97b\": container with ID starting with abcf99b1686ed30d644ac8cc3108a674f3f5c33b08bd1de6310637f6c896a97b not found: ID does not exist" Jan 21 12:55:48 crc kubenswrapper[4881]: I0121 12:55:48.009520 4881 scope.go:117] "RemoveContainer" containerID="44aa3e3470b0ca5ec311ca3411b98f9efca17f1a79eb00b5ddc10873f556ea0f" Jan 21 12:55:48 crc kubenswrapper[4881]: E0121 12:55:48.009821 4881 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"44aa3e3470b0ca5ec311ca3411b98f9efca17f1a79eb00b5ddc10873f556ea0f\": container with ID starting with 44aa3e3470b0ca5ec311ca3411b98f9efca17f1a79eb00b5ddc10873f556ea0f not found: ID does not exist" containerID="44aa3e3470b0ca5ec311ca3411b98f9efca17f1a79eb00b5ddc10873f556ea0f" Jan 21 12:55:48 crc kubenswrapper[4881]: I0121 12:55:48.009867 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44aa3e3470b0ca5ec311ca3411b98f9efca17f1a79eb00b5ddc10873f556ea0f"} err="failed to get container status \"44aa3e3470b0ca5ec311ca3411b98f9efca17f1a79eb00b5ddc10873f556ea0f\": rpc error: code = NotFound desc = could not find container \"44aa3e3470b0ca5ec311ca3411b98f9efca17f1a79eb00b5ddc10873f556ea0f\": container with ID starting with 44aa3e3470b0ca5ec311ca3411b98f9efca17f1a79eb00b5ddc10873f556ea0f not found: ID does not exist" Jan 21 12:55:49 crc kubenswrapper[4881]: I0121 12:55:49.326347 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbd14e97-6383-426c-a806-89dc0439e483" path="/var/lib/kubelet/pods/bbd14e97-6383-426c-a806-89dc0439e483/volumes" Jan 21 12:55:59 crc kubenswrapper[4881]: I0121 12:55:59.851689 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:55:59 crc kubenswrapper[4881]: I0121 12:55:59.852265 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:56:15 crc kubenswrapper[4881]: I0121 12:56:15.820605 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m25nf"] Jan 21 12:56:15 crc kubenswrapper[4881]: E0121 12:56:15.822242 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbd14e97-6383-426c-a806-89dc0439e483" containerName="extract-utilities" Jan 21 12:56:15 crc kubenswrapper[4881]: I0121 12:56:15.822275 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbd14e97-6383-426c-a806-89dc0439e483" containerName="extract-utilities" Jan 21 12:56:15 crc kubenswrapper[4881]: E0121 12:56:15.822304 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="484fa13a-3d87-4fdb-926a-4bedccfa3140" containerName="extract-utilities" Jan 21 12:56:15 crc kubenswrapper[4881]: I0121 12:56:15.822320 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="484fa13a-3d87-4fdb-926a-4bedccfa3140" containerName="extract-utilities" Jan 21 12:56:15 crc kubenswrapper[4881]: E0121 12:56:15.822352 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="484fa13a-3d87-4fdb-926a-4bedccfa3140" containerName="registry-server" Jan 21 12:56:15 crc kubenswrapper[4881]: I0121 12:56:15.822369 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="484fa13a-3d87-4fdb-926a-4bedccfa3140" containerName="registry-server" Jan 21 12:56:15 crc kubenswrapper[4881]: E0121 12:56:15.822408 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="484fa13a-3d87-4fdb-926a-4bedccfa3140" containerName="extract-content" Jan 21 12:56:15 crc kubenswrapper[4881]: 
I0121 12:56:15.822423 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="484fa13a-3d87-4fdb-926a-4bedccfa3140" containerName="extract-content" Jan 21 12:56:15 crc kubenswrapper[4881]: E0121 12:56:15.822448 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbd14e97-6383-426c-a806-89dc0439e483" containerName="registry-server" Jan 21 12:56:15 crc kubenswrapper[4881]: I0121 12:56:15.822463 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbd14e97-6383-426c-a806-89dc0439e483" containerName="registry-server" Jan 21 12:56:15 crc kubenswrapper[4881]: E0121 12:56:15.822493 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbd14e97-6383-426c-a806-89dc0439e483" containerName="extract-content" Jan 21 12:56:15 crc kubenswrapper[4881]: I0121 12:56:15.822508 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbd14e97-6383-426c-a806-89dc0439e483" containerName="extract-content" Jan 21 12:56:15 crc kubenswrapper[4881]: I0121 12:56:15.823054 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="484fa13a-3d87-4fdb-926a-4bedccfa3140" containerName="registry-server" Jan 21 12:56:15 crc kubenswrapper[4881]: I0121 12:56:15.823110 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbd14e97-6383-426c-a806-89dc0439e483" containerName="registry-server" Jan 21 12:56:15 crc kubenswrapper[4881]: I0121 12:56:15.827411 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:15 crc kubenswrapper[4881]: I0121 12:56:15.835499 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m25nf"] Jan 21 12:56:15 crc kubenswrapper[4881]: I0121 12:56:15.987442 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c806cc-58cd-40b7-972b-7d4e5a500a8a-utilities\") pod \"redhat-operators-m25nf\" (UID: \"31c806cc-58cd-40b7-972b-7d4e5a500a8a\") " pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:15 crc kubenswrapper[4881]: I0121 12:56:15.987882 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c806cc-58cd-40b7-972b-7d4e5a500a8a-catalog-content\") pod \"redhat-operators-m25nf\" (UID: \"31c806cc-58cd-40b7-972b-7d4e5a500a8a\") " pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:15 crc kubenswrapper[4881]: I0121 12:56:15.987922 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q85ss\" (UniqueName: \"kubernetes.io/projected/31c806cc-58cd-40b7-972b-7d4e5a500a8a-kube-api-access-q85ss\") pod \"redhat-operators-m25nf\" (UID: \"31c806cc-58cd-40b7-972b-7d4e5a500a8a\") " pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:16 crc kubenswrapper[4881]: I0121 12:56:16.090415 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c806cc-58cd-40b7-972b-7d4e5a500a8a-utilities\") pod \"redhat-operators-m25nf\" (UID: \"31c806cc-58cd-40b7-972b-7d4e5a500a8a\") " pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:16 crc kubenswrapper[4881]: I0121 12:56:16.090491 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/31c806cc-58cd-40b7-972b-7d4e5a500a8a-catalog-content\") pod \"redhat-operators-m25nf\" (UID: \"31c806cc-58cd-40b7-972b-7d4e5a500a8a\") " pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:16 crc kubenswrapper[4881]: I0121 12:56:16.090521 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q85ss\" (UniqueName: \"kubernetes.io/projected/31c806cc-58cd-40b7-972b-7d4e5a500a8a-kube-api-access-q85ss\") pod \"redhat-operators-m25nf\" (UID: \"31c806cc-58cd-40b7-972b-7d4e5a500a8a\") " pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:16 crc kubenswrapper[4881]: I0121 12:56:16.091021 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c806cc-58cd-40b7-972b-7d4e5a500a8a-utilities\") pod \"redhat-operators-m25nf\" (UID: \"31c806cc-58cd-40b7-972b-7d4e5a500a8a\") " pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:16 crc kubenswrapper[4881]: I0121 12:56:16.091425 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c806cc-58cd-40b7-972b-7d4e5a500a8a-catalog-content\") pod \"redhat-operators-m25nf\" (UID: \"31c806cc-58cd-40b7-972b-7d4e5a500a8a\") " pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:16 crc kubenswrapper[4881]: I0121 12:56:16.116757 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q85ss\" (UniqueName: \"kubernetes.io/projected/31c806cc-58cd-40b7-972b-7d4e5a500a8a-kube-api-access-q85ss\") pod \"redhat-operators-m25nf\" (UID: \"31c806cc-58cd-40b7-972b-7d4e5a500a8a\") " pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:16 crc kubenswrapper[4881]: I0121 12:56:16.166254 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:16 crc kubenswrapper[4881]: I0121 12:56:16.706428 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m25nf"] Jan 21 12:56:16 crc kubenswrapper[4881]: W0121 12:56:16.718871 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31c806cc_58cd_40b7_972b_7d4e5a500a8a.slice/crio-6ddbba52e6459507301326b72602f0700b0b72510f5829c2bc18824b919047dc WatchSource:0}: Error finding container 6ddbba52e6459507301326b72602f0700b0b72510f5829c2bc18824b919047dc: Status 404 returned error can't find the container with id 6ddbba52e6459507301326b72602f0700b0b72510f5829c2bc18824b919047dc Jan 21 12:56:17 crc kubenswrapper[4881]: I0121 12:56:17.300898 4881 generic.go:334] "Generic (PLEG): container finished" podID="31c806cc-58cd-40b7-972b-7d4e5a500a8a" containerID="84f8677f23cd5adefb850b16e1f11b4936586bf112d2cd38c8ec2e95645b2016" exitCode=0 Jan 21 12:56:17 crc kubenswrapper[4881]: I0121 12:56:17.301087 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m25nf" event={"ID":"31c806cc-58cd-40b7-972b-7d4e5a500a8a","Type":"ContainerDied","Data":"84f8677f23cd5adefb850b16e1f11b4936586bf112d2cd38c8ec2e95645b2016"} Jan 21 12:56:17 crc kubenswrapper[4881]: I0121 12:56:17.301231 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m25nf" event={"ID":"31c806cc-58cd-40b7-972b-7d4e5a500a8a","Type":"ContainerStarted","Data":"6ddbba52e6459507301326b72602f0700b0b72510f5829c2bc18824b919047dc"} Jan 21 12:56:19 crc kubenswrapper[4881]: I0121 12:56:19.332685 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m25nf" event={"ID":"31c806cc-58cd-40b7-972b-7d4e5a500a8a","Type":"ContainerStarted","Data":"44f12e1fe5e9c07616fdf345aff4d2db44db03a9ce1d6f3c3d04a8507e177c6d"} Jan 21 12:56:24 crc kubenswrapper[4881]: I0121 12:56:24.387880 4881 generic.go:334] "Generic (PLEG): container finished" podID="31c806cc-58cd-40b7-972b-7d4e5a500a8a" containerID="44f12e1fe5e9c07616fdf345aff4d2db44db03a9ce1d6f3c3d04a8507e177c6d" exitCode=0 Jan 21 12:56:24 crc kubenswrapper[4881]: I0121 12:56:24.387983 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m25nf" event={"ID":"31c806cc-58cd-40b7-972b-7d4e5a500a8a","Type":"ContainerDied","Data":"44f12e1fe5e9c07616fdf345aff4d2db44db03a9ce1d6f3c3d04a8507e177c6d"} Jan 21 12:56:25 crc kubenswrapper[4881]: I0121 12:56:25.401419 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m25nf" event={"ID":"31c806cc-58cd-40b7-972b-7d4e5a500a8a","Type":"ContainerStarted","Data":"701baa1d1fb5a942cb71b2fb4f5f8ca5e51da8cb32fde1934ed0f0163b92777b"} Jan 21 12:56:25 crc kubenswrapper[4881]: I0121 12:56:25.423761 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m25nf" podStartSLOduration=2.812697162 podStartE2EDuration="10.423731895s" podCreationTimestamp="2026-01-21 12:56:15 +0000 UTC" firstStartedPulling="2026-01-21 12:56:17.303004944 +0000 UTC m=+7164.562961413" lastFinishedPulling="2026-01-21 12:56:24.914039667 +0000 UTC m=+7172.173996146" observedRunningTime="2026-01-21 12:56:25.421361407 +0000 UTC m=+7172.681317916" watchObservedRunningTime="2026-01-21 12:56:25.423731895 +0000 UTC m=+7172.683688374" Jan 21 12:56:26 crc 
kubenswrapper[4881]: I0121 12:56:26.167561 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:26 crc kubenswrapper[4881]: I0121 12:56:26.167889 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:27 crc kubenswrapper[4881]: I0121 12:56:27.470976 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m25nf" podUID="31c806cc-58cd-40b7-972b-7d4e5a500a8a" containerName="registry-server" probeResult="failure" output=< Jan 21 12:56:27 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 12:56:27 crc kubenswrapper[4881]: > Jan 21 12:56:29 crc kubenswrapper[4881]: I0121 12:56:29.850925 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:56:29 crc kubenswrapper[4881]: I0121 12:56:29.851242 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:56:36 crc kubenswrapper[4881]: I0121 12:56:36.249637 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:36 crc kubenswrapper[4881]: I0121 12:56:36.325958 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:39 crc kubenswrapper[4881]: I0121 12:56:39.770626 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m25nf"] Jan 21 12:56:39 crc kubenswrapper[4881]: I0121 12:56:39.772236 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m25nf" podUID="31c806cc-58cd-40b7-972b-7d4e5a500a8a" containerName="registry-server" containerID="cri-o://701baa1d1fb5a942cb71b2fb4f5f8ca5e51da8cb32fde1934ed0f0163b92777b" gracePeriod=2 Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.257230 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.345573 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c806cc-58cd-40b7-972b-7d4e5a500a8a-catalog-content\") pod \"31c806cc-58cd-40b7-972b-7d4e5a500a8a\" (UID: \"31c806cc-58cd-40b7-972b-7d4e5a500a8a\") " Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.345688 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q85ss\" (UniqueName: \"kubernetes.io/projected/31c806cc-58cd-40b7-972b-7d4e5a500a8a-kube-api-access-q85ss\") pod \"31c806cc-58cd-40b7-972b-7d4e5a500a8a\" (UID: \"31c806cc-58cd-40b7-972b-7d4e5a500a8a\") " Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.345802 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c806cc-58cd-40b7-972b-7d4e5a500a8a-utilities\") pod \"31c806cc-58cd-40b7-972b-7d4e5a500a8a\" (UID: \"31c806cc-58cd-40b7-972b-7d4e5a500a8a\") " Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.346675 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31c806cc-58cd-40b7-972b-7d4e5a500a8a-utilities" (OuterVolumeSpecName: "utilities") pod "31c806cc-58cd-40b7-972b-7d4e5a500a8a" (UID: "31c806cc-58cd-40b7-972b-7d4e5a500a8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.354621 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31c806cc-58cd-40b7-972b-7d4e5a500a8a-kube-api-access-q85ss" (OuterVolumeSpecName: "kube-api-access-q85ss") pod "31c806cc-58cd-40b7-972b-7d4e5a500a8a" (UID: "31c806cc-58cd-40b7-972b-7d4e5a500a8a"). InnerVolumeSpecName "kube-api-access-q85ss". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.448827 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q85ss\" (UniqueName: \"kubernetes.io/projected/31c806cc-58cd-40b7-972b-7d4e5a500a8a-kube-api-access-q85ss\") on node \"crc\" DevicePath \"\"" Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.448870 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c806cc-58cd-40b7-972b-7d4e5a500a8a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.513654 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31c806cc-58cd-40b7-972b-7d4e5a500a8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31c806cc-58cd-40b7-972b-7d4e5a500a8a" (UID: "31c806cc-58cd-40b7-972b-7d4e5a500a8a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.550227 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c806cc-58cd-40b7-972b-7d4e5a500a8a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.607390 4881 generic.go:334] "Generic (PLEG): container finished" podID="31c806cc-58cd-40b7-972b-7d4e5a500a8a" containerID="701baa1d1fb5a942cb71b2fb4f5f8ca5e51da8cb32fde1934ed0f0163b92777b" exitCode=0 Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.607440 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m25nf" event={"ID":"31c806cc-58cd-40b7-972b-7d4e5a500a8a","Type":"ContainerDied","Data":"701baa1d1fb5a942cb71b2fb4f5f8ca5e51da8cb32fde1934ed0f0163b92777b"} Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.607486 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m25nf" event={"ID":"31c806cc-58cd-40b7-972b-7d4e5a500a8a","Type":"ContainerDied","Data":"6ddbba52e6459507301326b72602f0700b0b72510f5829c2bc18824b919047dc"} Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.607510 4881 scope.go:117] "RemoveContainer" containerID="701baa1d1fb5a942cb71b2fb4f5f8ca5e51da8cb32fde1934ed0f0163b92777b" Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.607564 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.667140 4881 scope.go:117] "RemoveContainer" containerID="44f12e1fe5e9c07616fdf345aff4d2db44db03a9ce1d6f3c3d04a8507e177c6d" Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.673647 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m25nf"] Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.684947 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m25nf"] Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.697716 4881 scope.go:117] "RemoveContainer" containerID="84f8677f23cd5adefb850b16e1f11b4936586bf112d2cd38c8ec2e95645b2016" Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.743073 4881 scope.go:117] "RemoveContainer" containerID="701baa1d1fb5a942cb71b2fb4f5f8ca5e51da8cb32fde1934ed0f0163b92777b" Jan 21 12:56:40 crc kubenswrapper[4881]: E0121 12:56:40.743520 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"701baa1d1fb5a942cb71b2fb4f5f8ca5e51da8cb32fde1934ed0f0163b92777b\": container with ID starting with 701baa1d1fb5a942cb71b2fb4f5f8ca5e51da8cb32fde1934ed0f0163b92777b not found: ID does not exist" containerID="701baa1d1fb5a942cb71b2fb4f5f8ca5e51da8cb32fde1934ed0f0163b92777b" Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.743563 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"701baa1d1fb5a942cb71b2fb4f5f8ca5e51da8cb32fde1934ed0f0163b92777b"} err="failed to get container status \"701baa1d1fb5a942cb71b2fb4f5f8ca5e51da8cb32fde1934ed0f0163b92777b\": rpc error: code = NotFound desc = could not find container \"701baa1d1fb5a942cb71b2fb4f5f8ca5e51da8cb32fde1934ed0f0163b92777b\": container with ID starting with 701baa1d1fb5a942cb71b2fb4f5f8ca5e51da8cb32fde1934ed0f0163b92777b not found: ID does not exist" Jan 21 12:56:40 crc 
kubenswrapper[4881]: I0121 12:56:40.743592 4881 scope.go:117] "RemoveContainer" containerID="44f12e1fe5e9c07616fdf345aff4d2db44db03a9ce1d6f3c3d04a8507e177c6d" Jan 21 12:56:40 crc kubenswrapper[4881]: E0121 12:56:40.743947 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44f12e1fe5e9c07616fdf345aff4d2db44db03a9ce1d6f3c3d04a8507e177c6d\": container with ID starting with 44f12e1fe5e9c07616fdf345aff4d2db44db03a9ce1d6f3c3d04a8507e177c6d not found: ID does not exist" containerID="44f12e1fe5e9c07616fdf345aff4d2db44db03a9ce1d6f3c3d04a8507e177c6d" Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.744001 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44f12e1fe5e9c07616fdf345aff4d2db44db03a9ce1d6f3c3d04a8507e177c6d"} err="failed to get container status \"44f12e1fe5e9c07616fdf345aff4d2db44db03a9ce1d6f3c3d04a8507e177c6d\": rpc error: code = NotFound desc = could not find container \"44f12e1fe5e9c07616fdf345aff4d2db44db03a9ce1d6f3c3d04a8507e177c6d\": container with ID starting with 44f12e1fe5e9c07616fdf345aff4d2db44db03a9ce1d6f3c3d04a8507e177c6d not found: ID does not exist" Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.744038 4881 scope.go:117] "RemoveContainer" containerID="84f8677f23cd5adefb850b16e1f11b4936586bf112d2cd38c8ec2e95645b2016" Jan 21 12:56:40 crc kubenswrapper[4881]: E0121 12:56:40.744479 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84f8677f23cd5adefb850b16e1f11b4936586bf112d2cd38c8ec2e95645b2016\": container with ID starting with 84f8677f23cd5adefb850b16e1f11b4936586bf112d2cd38c8ec2e95645b2016 not found: ID does not exist" containerID="84f8677f23cd5adefb850b16e1f11b4936586bf112d2cd38c8ec2e95645b2016" Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.744529 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84f8677f23cd5adefb850b16e1f11b4936586bf112d2cd38c8ec2e95645b2016"} err="failed to get container status \"84f8677f23cd5adefb850b16e1f11b4936586bf112d2cd38c8ec2e95645b2016\": rpc error: code = NotFound desc = could not find container \"84f8677f23cd5adefb850b16e1f11b4936586bf112d2cd38c8ec2e95645b2016\": container with ID starting with 84f8677f23cd5adefb850b16e1f11b4936586bf112d2cd38c8ec2e95645b2016 not found: ID does not exist" Jan 21 12:56:41 crc kubenswrapper[4881]: I0121 12:56:41.327314 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31c806cc-58cd-40b7-972b-7d4e5a500a8a" path="/var/lib/kubelet/pods/31c806cc-58cd-40b7-972b-7d4e5a500a8a/volumes" Jan 21 12:56:59 crc kubenswrapper[4881]: I0121 12:56:59.899020 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:56:59 crc kubenswrapper[4881]: I0121 12:56:59.899536 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:56:59 crc kubenswrapper[4881]: I0121 12:56:59.899592 4881 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 12:56:59 crc kubenswrapper[4881]: I0121 12:56:59.900452 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dedb540716d32e2d9c1d7422b582f5eca19a8a8f41fc5f2cec024d263d91f035"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 12:56:59 crc kubenswrapper[4881]: I0121 12:56:59.900504 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://dedb540716d32e2d9c1d7422b582f5eca19a8a8f41fc5f2cec024d263d91f035" gracePeriod=600 Jan 21 12:57:00 crc kubenswrapper[4881]: I0121 12:57:00.939256 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="dedb540716d32e2d9c1d7422b582f5eca19a8a8f41fc5f2cec024d263d91f035" exitCode=0 Jan 21 12:57:00 crc kubenswrapper[4881]: I0121 12:57:00.939334 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"dedb540716d32e2d9c1d7422b582f5eca19a8a8f41fc5f2cec024d263d91f035"} Jan 21 12:57:00 crc kubenswrapper[4881]: I0121 12:57:00.940096 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a"} Jan 21 12:57:00 crc kubenswrapper[4881]: I0121 12:57:00.940151 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.437492 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pd9dn/must-gather-wjn9v"] Jan 21 12:58:14 crc kubenswrapper[4881]: E0121 12:58:14.438398 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c806cc-58cd-40b7-972b-7d4e5a500a8a" containerName="extract-content" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.438424 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c806cc-58cd-40b7-972b-7d4e5a500a8a" containerName="extract-content" Jan 21 12:58:14 crc kubenswrapper[4881]: E0121 12:58:14.438446 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c806cc-58cd-40b7-972b-7d4e5a500a8a" containerName="extract-utilities" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.438452 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c806cc-58cd-40b7-972b-7d4e5a500a8a" containerName="extract-utilities" Jan 21 12:58:14 crc kubenswrapper[4881]: E0121 12:58:14.438480 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c806cc-58cd-40b7-972b-7d4e5a500a8a" containerName="registry-server" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.438487 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c806cc-58cd-40b7-972b-7d4e5a500a8a" containerName="registry-server" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.438744 4881 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="31c806cc-58cd-40b7-972b-7d4e5a500a8a" containerName="registry-server" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.442439 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pd9dn/must-gather-wjn9v" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.449434 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pd9dn"/"kube-root-ca.crt" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.449738 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pd9dn"/"openshift-service-ca.crt" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.450014 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-pd9dn"/"default-dockercfg-8m7sq" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.452619 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pd9dn/must-gather-wjn9v"] Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.573464 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvpvb\" (UniqueName: \"kubernetes.io/projected/ec6c7413-f699-442c-b92e-bbe40326dcb1-kube-api-access-wvpvb\") pod \"must-gather-wjn9v\" (UID: \"ec6c7413-f699-442c-b92e-bbe40326dcb1\") " pod="openshift-must-gather-pd9dn/must-gather-wjn9v" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.573520 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ec6c7413-f699-442c-b92e-bbe40326dcb1-must-gather-output\") pod \"must-gather-wjn9v\" (UID: \"ec6c7413-f699-442c-b92e-bbe40326dcb1\") " pod="openshift-must-gather-pd9dn/must-gather-wjn9v" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.676650 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvpvb\" (UniqueName: \"kubernetes.io/projected/ec6c7413-f699-442c-b92e-bbe40326dcb1-kube-api-access-wvpvb\") pod \"must-gather-wjn9v\" (UID: \"ec6c7413-f699-442c-b92e-bbe40326dcb1\") " pod="openshift-must-gather-pd9dn/must-gather-wjn9v" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.677028 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ec6c7413-f699-442c-b92e-bbe40326dcb1-must-gather-output\") pod \"must-gather-wjn9v\" (UID: \"ec6c7413-f699-442c-b92e-bbe40326dcb1\") " pod="openshift-must-gather-pd9dn/must-gather-wjn9v" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.677483 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ec6c7413-f699-442c-b92e-bbe40326dcb1-must-gather-output\") pod \"must-gather-wjn9v\" (UID: \"ec6c7413-f699-442c-b92e-bbe40326dcb1\") " pod="openshift-must-gather-pd9dn/must-gather-wjn9v" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.697673 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvpvb\" (UniqueName: \"kubernetes.io/projected/ec6c7413-f699-442c-b92e-bbe40326dcb1-kube-api-access-wvpvb\") pod \"must-gather-wjn9v\" (UID: \"ec6c7413-f699-442c-b92e-bbe40326dcb1\") " pod="openshift-must-gather-pd9dn/must-gather-wjn9v" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.771385 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pd9dn/must-gather-wjn9v" Jan 21 12:58:15 crc kubenswrapper[4881]: I0121 12:58:15.255155 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pd9dn/must-gather-wjn9v"] Jan 21 12:58:16 crc kubenswrapper[4881]: I0121 12:58:16.205358 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pd9dn/must-gather-wjn9v" event={"ID":"ec6c7413-f699-442c-b92e-bbe40326dcb1","Type":"ContainerStarted","Data":"f8435432aef52b19bad8a8cd808ccfc704ccacec42b64e9e84020e60e34cf08a"} Jan 21 12:58:28 crc kubenswrapper[4881]: I0121 12:58:28.559622 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pd9dn/must-gather-wjn9v" event={"ID":"ec6c7413-f699-442c-b92e-bbe40326dcb1","Type":"ContainerStarted","Data":"a4e2dbbd606e451b55b6b34e41cf24c5d9baf413c001fc8ed6b035bceeebfbb1"} Jan 21 12:58:28 crc kubenswrapper[4881]: I0121 12:58:28.560391 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pd9dn/must-gather-wjn9v" event={"ID":"ec6c7413-f699-442c-b92e-bbe40326dcb1","Type":"ContainerStarted","Data":"9cd18be4060450a8e8728911060acaac370f6470f67553eea3230920b13495f5"} Jan 21 12:58:29 crc kubenswrapper[4881]: I0121 12:58:29.588593 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pd9dn/must-gather-wjn9v" podStartSLOduration=3.207365374 podStartE2EDuration="15.588553757s" podCreationTimestamp="2026-01-21 12:58:14 +0000 UTC" firstStartedPulling="2026-01-21 12:58:15.265857029 +0000 UTC m=+7282.525813498" lastFinishedPulling="2026-01-21 12:58:27.647045412 +0000 UTC m=+7294.907001881" observedRunningTime="2026-01-21 12:58:29.584772705 +0000 UTC m=+7296.844729184" watchObservedRunningTime="2026-01-21 12:58:29.588553757 +0000 UTC m=+7296.848510226" Jan 21 12:58:32 crc kubenswrapper[4881]: I0121 12:58:32.997765 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pd9dn/crc-debug-r7kk4"] Jan 21 12:58:33 crc kubenswrapper[4881]: I0121 12:58:33.000175 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pd9dn/crc-debug-r7kk4" Jan 21 12:58:33 crc kubenswrapper[4881]: I0121 12:58:33.177704 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32ea9833-e257-4601-8be7-dcf0882d25ff-host\") pod \"crc-debug-r7kk4\" (UID: \"32ea9833-e257-4601-8be7-dcf0882d25ff\") " pod="openshift-must-gather-pd9dn/crc-debug-r7kk4" Jan 21 12:58:33 crc kubenswrapper[4881]: I0121 12:58:33.178045 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfvbq\" (UniqueName: \"kubernetes.io/projected/32ea9833-e257-4601-8be7-dcf0882d25ff-kube-api-access-hfvbq\") pod \"crc-debug-r7kk4\" (UID: \"32ea9833-e257-4601-8be7-dcf0882d25ff\") " pod="openshift-must-gather-pd9dn/crc-debug-r7kk4" Jan 21 12:58:33 crc kubenswrapper[4881]: I0121 12:58:33.280292 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32ea9833-e257-4601-8be7-dcf0882d25ff-host\") pod \"crc-debug-r7kk4\" (UID: \"32ea9833-e257-4601-8be7-dcf0882d25ff\") " pod="openshift-must-gather-pd9dn/crc-debug-r7kk4" Jan 21 12:58:33 crc kubenswrapper[4881]: I0121 12:58:33.280342 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfvbq\" (UniqueName: \"kubernetes.io/projected/32ea9833-e257-4601-8be7-dcf0882d25ff-kube-api-access-hfvbq\") pod \"crc-debug-r7kk4\" (UID: \"32ea9833-e257-4601-8be7-dcf0882d25ff\") " pod="openshift-must-gather-pd9dn/crc-debug-r7kk4" Jan 21 12:58:33 crc kubenswrapper[4881]: I0121 12:58:33.280810 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32ea9833-e257-4601-8be7-dcf0882d25ff-host\") pod \"crc-debug-r7kk4\" (UID: \"32ea9833-e257-4601-8be7-dcf0882d25ff\") " pod="openshift-must-gather-pd9dn/crc-debug-r7kk4" Jan 21 12:58:33 crc kubenswrapper[4881]: I0121 12:58:33.304648 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfvbq\" (UniqueName: \"kubernetes.io/projected/32ea9833-e257-4601-8be7-dcf0882d25ff-kube-api-access-hfvbq\") pod \"crc-debug-r7kk4\" (UID: \"32ea9833-e257-4601-8be7-dcf0882d25ff\") " pod="openshift-must-gather-pd9dn/crc-debug-r7kk4" Jan 21 12:58:33 crc kubenswrapper[4881]: I0121 12:58:33.327796 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pd9dn/crc-debug-r7kk4" Jan 21 12:58:33 crc kubenswrapper[4881]: W0121 12:58:33.368280 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32ea9833_e257_4601_8be7_dcf0882d25ff.slice/crio-5015ec3924ce05a87e752db51205b1697e3330ac046050ee395aa7729f42795a WatchSource:0}: Error finding container 5015ec3924ce05a87e752db51205b1697e3330ac046050ee395aa7729f42795a: Status 404 returned error can't find the container with id 5015ec3924ce05a87e752db51205b1697e3330ac046050ee395aa7729f42795a Jan 21 12:58:33 crc kubenswrapper[4881]: I0121 12:58:33.611021 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pd9dn/crc-debug-r7kk4" event={"ID":"32ea9833-e257-4601-8be7-dcf0882d25ff","Type":"ContainerStarted","Data":"5015ec3924ce05a87e752db51205b1697e3330ac046050ee395aa7729f42795a"} Jan 21 12:58:36 crc kubenswrapper[4881]: I0121 12:58:36.380738 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7d6f7f4cc8-c4tt4_9bc5ed6a-2607-4a28-8bd3-949b0f0c761d/barbican-api-log/0.log" Jan 21 12:58:36 crc kubenswrapper[4881]: I0121 12:58:36.395028 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7d6f7f4cc8-c4tt4_9bc5ed6a-2607-4a28-8bd3-949b0f0c761d/barbican-api/0.log" Jan 21 12:58:36 crc kubenswrapper[4881]: I0121 12:58:36.625698 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-54f549c774-rnptw_6e80f53a-8873-4c07-b738-2854d9b8b089/barbican-keystone-listener-log/0.log" Jan 21 12:58:36 crc kubenswrapper[4881]: I0121 12:58:36.635264 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-54f549c774-rnptw_6e80f53a-8873-4c07-b738-2854d9b8b089/barbican-keystone-listener/0.log" Jan 21 12:58:36 crc kubenswrapper[4881]: I0121 12:58:36.746658 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-55755579c5-csgz2_90253f07-2dfb-48b3-9b75-34a653836589/barbican-worker-log/0.log" Jan 21 12:58:36 crc kubenswrapper[4881]: I0121 12:58:36.756042 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-55755579c5-csgz2_90253f07-2dfb-48b3-9b75-34a653836589/barbican-worker/0.log" Jan 21 12:58:36 crc kubenswrapper[4881]: I0121 12:58:36.814071 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5_5930ee4f-c104-4ac5-9440-2a24d110fae5/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:58:37 crc kubenswrapper[4881]: I0121 12:58:37.194547 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5926a818-11da-4b6b-bae0-79e6d9e10728/ceilometer-central-agent/0.log" Jan 21 12:58:37 crc kubenswrapper[4881]: I0121 12:58:37.479906 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5926a818-11da-4b6b-bae0-79e6d9e10728/ceilometer-notification-agent/0.log" Jan 21 12:58:37 crc kubenswrapper[4881]: I0121 12:58:37.487083 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5926a818-11da-4b6b-bae0-79e6d9e10728/sg-core/0.log" Jan 21 12:58:37 crc kubenswrapper[4881]: I0121 12:58:37.512993 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5926a818-11da-4b6b-bae0-79e6d9e10728/proxy-httpd/0.log" Jan 21 12:58:37 crc kubenswrapper[4881]: I0121 12:58:37.882025 
4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ae53e440-5bd5-41e3-8339-57eebaef03d2/cinder-api-log/0.log" Jan 21 12:58:38 crc kubenswrapper[4881]: I0121 12:58:38.276088 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ae53e440-5bd5-41e3-8339-57eebaef03d2/cinder-api/0.log" Jan 21 12:58:38 crc kubenswrapper[4881]: I0121 12:58:38.709153 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_306aceba-6a20-4b47-a19a-fb193a27e2bd/cinder-backup/0.log" Jan 21 12:58:39 crc kubenswrapper[4881]: I0121 12:58:39.204380 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_306aceba-6a20-4b47-a19a-fb193a27e2bd/probe/0.log" Jan 21 12:58:39 crc kubenswrapper[4881]: I0121 12:58:39.331203 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ab676e77-1ab3-4cab-9960-a00babfe74fb/cinder-scheduler/0.log" Jan 21 12:58:39 crc kubenswrapper[4881]: I0121 12:58:39.398172 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ab676e77-1ab3-4cab-9960-a00babfe74fb/probe/0.log" Jan 21 12:58:39 crc kubenswrapper[4881]: I0121 12:58:39.482186 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_8c912ca5-a82b-4083-8579-f0f6f506eebb/cinder-volume/0.log" Jan 21 12:58:39 crc kubenswrapper[4881]: I0121 12:58:39.807757 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_8c912ca5-a82b-4083-8579-f0f6f506eebb/probe/0.log" Jan 21 12:58:39 crc kubenswrapper[4881]: I0121 12:58:39.925592 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_112f53db-2aaa-4a3d-bc89-fd86952639ab/cinder-volume/0.log" Jan 21 12:58:39 crc kubenswrapper[4881]: I0121 12:58:39.990374 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_112f53db-2aaa-4a3d-bc89-fd86952639ab/probe/0.log" Jan 21 12:58:40 crc kubenswrapper[4881]: I0121 12:58:40.085751 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6_24a093f9-cd67-48f9-a18b-48d1a79a8aa0/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:58:40 crc kubenswrapper[4881]: I0121 12:58:40.123139 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-c995r_f96dcee4-7734-4166-9a01-443c6ee66f86/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:58:40 crc kubenswrapper[4881]: I0121 12:58:40.282059 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-59596cff49-cpxcq_a08dbd57-125f-4ca2-b166-434068ee9432/dnsmasq-dns/0.log" Jan 21 12:58:40 crc kubenswrapper[4881]: I0121 12:58:40.301984 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-59596cff49-cpxcq_a08dbd57-125f-4ca2-b166-434068ee9432/init/0.log" Jan 21 12:58:40 crc kubenswrapper[4881]: I0121 12:58:40.348476 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt_01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:58:40 crc kubenswrapper[4881]: I0121 12:58:40.362941 4881 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_3e7b52fc-b295-475c-bef6-074b1cb2a2f5/glance-log/0.log" Jan 21 12:58:40 crc kubenswrapper[4881]: I0121 12:58:40.463232 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_3e7b52fc-b295-475c-bef6-074b1cb2a2f5/glance-httpd/0.log" Jan 21 12:58:40 crc kubenswrapper[4881]: I0121 12:58:40.479059 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ec8e0779-1552-4ebb-88d7-95a49e734b55/glance-log/0.log" Jan 21 12:58:40 crc kubenswrapper[4881]: I0121 12:58:40.521558 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ec8e0779-1552-4ebb-88d7-95a49e734b55/glance-httpd/0.log" Jan 21 12:58:41 crc kubenswrapper[4881]: I0121 12:58:41.931329 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-68b447d964-6llq5_07cdf1a8-aec4-42ca-a564-c91e7132663d/horizon-log/0.log" Jan 21 12:58:42 crc kubenswrapper[4881]: I0121 12:58:42.035718 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-68b447d964-6llq5_07cdf1a8-aec4-42ca-a564-c91e7132663d/horizon/0.log" Jan 21 12:58:42 crc kubenswrapper[4881]: I0121 12:58:42.069037 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-5l99l_1ef84c59-8554-4369-9f9f-877505b3b952/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:58:42 crc kubenswrapper[4881]: I0121 12:58:42.153677 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-6khfl_3880ebda-d882-4e35-89e7-ef739a423a7d/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:58:42 crc kubenswrapper[4881]: I0121 12:58:42.512358 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-857c5cc966-ggkc4_cacf36ac-8c52-43a6-9fcb-2cfc5b27a952/keystone-api/0.log" Jan 21 12:58:42 crc kubenswrapper[4881]: I0121 12:58:42.523544 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29483281-5vf4h_d4b92750-a75d-44b9-b0ba-75296371fc59/keystone-cron/0.log" Jan 21 12:58:42 crc kubenswrapper[4881]: I0121 12:58:42.774977 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_0e33ff3f-b508-4ac4-9a60-6189a65be2a6/kube-state-metrics/0.log" Jan 21 12:58:42 crc kubenswrapper[4881]: I0121 12:58:42.833306 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq_38ac646b-177b-488d-853b-e04b22f267a4/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:58:47 crc kubenswrapper[4881]: I0121 12:58:47.837148 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pd9dn/crc-debug-r7kk4" event={"ID":"32ea9833-e257-4601-8be7-dcf0882d25ff","Type":"ContainerStarted","Data":"0818ec9313f2fc50a748108c2a7b4170d06db46eb9b811376ec620220e592ebc"} Jan 21 12:58:47 crc kubenswrapper[4881]: I0121 12:58:47.868680 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pd9dn/crc-debug-r7kk4" podStartSLOduration=1.998225284 podStartE2EDuration="15.868648755s" podCreationTimestamp="2026-01-21 12:58:32 +0000 UTC" firstStartedPulling="2026-01-21 12:58:33.370769403 +0000 UTC m=+7300.630725872" lastFinishedPulling="2026-01-21 12:58:47.241192874 +0000 UTC m=+7314.501149343" observedRunningTime="2026-01-21 
12:58:47.863384797 +0000 UTC m=+7315.123341276" watchObservedRunningTime="2026-01-21 12:58:47.868648755 +0000 UTC m=+7315.128605224" Jan 21 12:59:00 crc kubenswrapper[4881]: I0121 12:59:00.736027 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-dmwlt_c4a109b4-26ee-4a46-9333-989cf87c0ff7/controller/0.log" Jan 21 12:59:00 crc kubenswrapper[4881]: I0121 12:59:00.756822 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-dmwlt_c4a109b4-26ee-4a46-9333-989cf87c0ff7/kube-rbac-proxy/0.log" Jan 21 12:59:00 crc kubenswrapper[4881]: I0121 12:59:00.808276 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/controller/0.log" Jan 21 12:59:04 crc kubenswrapper[4881]: I0121 12:59:04.110978 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/frr/0.log" Jan 21 12:59:04 crc kubenswrapper[4881]: I0121 12:59:04.124845 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/reloader/0.log" Jan 21 12:59:04 crc kubenswrapper[4881]: I0121 12:59:04.130225 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/frr-metrics/0.log" Jan 21 12:59:04 crc kubenswrapper[4881]: I0121 12:59:04.150816 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/kube-rbac-proxy/0.log" Jan 21 12:59:04 crc kubenswrapper[4881]: I0121 12:59:04.161391 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/kube-rbac-proxy-frr/0.log" Jan 21 12:59:04 crc kubenswrapper[4881]: I0121 12:59:04.178065 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/cp-frr-files/0.log" Jan 21 12:59:04 crc kubenswrapper[4881]: I0121 12:59:04.190223 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/cp-reloader/0.log" Jan 21 12:59:04 crc kubenswrapper[4881]: I0121 12:59:04.203642 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/cp-metrics/0.log" Jan 21 12:59:04 crc kubenswrapper[4881]: I0121 12:59:04.230056 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-tzxpk_eaaea696-21d8-4963-8364-82fa7bbb0e19/frr-k8s-webhook-server/0.log" Jan 21 12:59:04 crc kubenswrapper[4881]: I0121 12:59:04.276587 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-58bd8f8bd-8k4c9_769e47b6-bd47-489d-9b99-4f2f0e30c4fd/manager/0.log" Jan 21 12:59:04 crc kubenswrapper[4881]: I0121 12:59:04.286395 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5cd4664cfc-6lg4r_a194c95e-cbcb-4d7e-a631-d4a14989e985/webhook-server/0.log" Jan 21 12:59:05 crc kubenswrapper[4881]: I0121 12:59:05.471178 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-697j4_f265a6e2-ea90-45ea-89c0-178d25243784/speaker/0.log" Jan 21 12:59:05 crc kubenswrapper[4881]: I0121 12:59:05.478941 4881 log.go:25] "Finished parsing log 
file" path="/var/log/pods/metallb-system_speaker-697j4_f265a6e2-ea90-45ea-89c0-178d25243784/kube-rbac-proxy/0.log" Jan 21 12:59:15 crc kubenswrapper[4881]: I0121 12:59:15.800433 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_7960c16a-de64-4154-9072-aee49e3bd573/memcached/0.log" Jan 21 12:59:15 crc kubenswrapper[4881]: I0121 12:59:15.933476 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-667d9dbbbc-pcbhd_3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9/neutron-api/0.log" Jan 21 12:59:16 crc kubenswrapper[4881]: I0121 12:59:16.019877 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-667d9dbbbc-pcbhd_3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9/neutron-httpd/0.log" Jan 21 12:59:16 crc kubenswrapper[4881]: I0121 12:59:16.046081 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp_0e428246-daf9-40a4-9049-74281259f82c/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:59:16 crc kubenswrapper[4881]: I0121 12:59:16.578980 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_1188227a-462c-4c61-ae6e-96b55ffacd71/nova-api-log/0.log" Jan 21 12:59:17 crc kubenswrapper[4881]: I0121 12:59:17.452895 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_1188227a-462c-4c61-ae6e-96b55ffacd71/nova-api-api/0.log" Jan 21 12:59:17 crc kubenswrapper[4881]: I0121 12:59:17.583830 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_dc5fb029-b5fa-4065-adb2-af2e634785fc/nova-cell0-conductor-conductor/0.log" Jan 21 12:59:17 crc kubenswrapper[4881]: I0121 12:59:17.685326 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_161c46d2-7b98-4a9e-a648-ce25b966f589/nova-cell1-conductor-conductor/0.log" Jan 21 12:59:17 crc kubenswrapper[4881]: I0121 12:59:17.786185 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_b9ce9000-94ef-4f6e-8bc7-97feca616b9e/nova-cell1-novncproxy-novncproxy/0.log" Jan 21 12:59:17 crc kubenswrapper[4881]: I0121 12:59:17.858973 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-t495m_bfc5a115-aedb-4364-8b0d-59b8379346cb/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:59:17 crc kubenswrapper[4881]: I0121 12:59:17.953009 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ba03e9fe-3ad6-4c52-bde7-bd41fca63834/nova-metadata-log/0.log" Jan 21 12:59:20 crc kubenswrapper[4881]: I0121 12:59:20.386210 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ba03e9fe-3ad6-4c52-bde7-bd41fca63834/nova-metadata-metadata/0.log" Jan 21 12:59:20 crc kubenswrapper[4881]: I0121 12:59:20.566702 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_6f6e9d1b-902e-450b-8202-337c04c265ba/nova-scheduler-scheduler/0.log" Jan 21 12:59:20 crc kubenswrapper[4881]: I0121 12:59:20.597059 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_cd1973a5-773b-438b-aab7-709fb828324d/galera/0.log" Jan 21 12:59:20 crc kubenswrapper[4881]: I0121 12:59:20.608316 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_cd1973a5-773b-438b-aab7-709fb828324d/mysql-bootstrap/0.log" Jan 21 12:59:20 crc 
kubenswrapper[4881]: I0121 12:59:20.639665 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_197dd5bf-f68a-4d9d-b75c-de87a54ed46b/galera/0.log" Jan 21 12:59:20 crc kubenswrapper[4881]: I0121 12:59:20.653717 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_197dd5bf-f68a-4d9d-b75c-de87a54ed46b/mysql-bootstrap/0.log" Jan 21 12:59:20 crc kubenswrapper[4881]: I0121 12:59:20.663853 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_b0b6ce2c-5ae8-496f-9374-d3069bc3d149/openstackclient/0.log" Jan 21 12:59:20 crc kubenswrapper[4881]: I0121 12:59:20.676036 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-5dzhr_b9bd229b-588d-477e-8501-cd976b539e3a/openstack-network-exporter/0.log" Jan 21 12:59:20 crc kubenswrapper[4881]: I0121 12:59:20.690199 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2rtl8_9ff4a63e-40e5-4133-967e-9ba083f3603b/ovsdb-server/0.log" Jan 21 12:59:20 crc kubenswrapper[4881]: I0121 12:59:20.903858 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2rtl8_9ff4a63e-40e5-4133-967e-9ba083f3603b/ovs-vswitchd/0.log" Jan 21 12:59:20 crc kubenswrapper[4881]: I0121 12:59:20.911434 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2rtl8_9ff4a63e-40e5-4133-967e-9ba083f3603b/ovsdb-server-init/0.log" Jan 21 12:59:20 crc kubenswrapper[4881]: I0121 12:59:20.929586 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-s642n_256e0b4a-baac-415c-94c6-09f08fa09c7c/ovn-controller/0.log" Jan 21 12:59:20 crc kubenswrapper[4881]: I0121 12:59:20.997868 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-d4sgg_11ba18fa-d69e-4a6b-9796-e92d95d702ec/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.215706 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b3882b01-10ce-4832-ae71-676a8b65b086/ovn-northd/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.232080 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b3882b01-10ce-4832-ae71-676a8b65b086/openstack-network-exporter/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.251452 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_24136f67-aca3-4e40-b3c2-b36b7623475f/ovsdbserver-nb/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.261069 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_24136f67-aca3-4e40-b3c2-b36b7623475f/openstack-network-exporter/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.283430 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c3884c64-25d6-42b5-a309-7eafa170719e/ovsdbserver-sb/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.292549 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c3884c64-25d6-42b5-a309-7eafa170719e/openstack-network-exporter/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.455205 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-59bf6c8c7b-wvc46_9358f706-24c3-46c5-8490-89402a85e9a4/placement-log/0.log" Jan 21 12:59:21 crc 
kubenswrapper[4881]: I0121 12:59:21.597139 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-59bf6c8c7b-wvc46_9358f706-24c3-46c5-8490-89402a85e9a4/placement-api/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.616044 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_4a412b1e-29ac-4420-920d-6054e2c03d53/prometheus/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.623474 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_4a412b1e-29ac-4420-920d-6054e2c03d53/config-reloader/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.633843 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_4a412b1e-29ac-4420-920d-6054e2c03d53/thanos-sidecar/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.640890 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_4a412b1e-29ac-4420-920d-6054e2c03d53/init-config-reloader/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.683277 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_de7ea801-d184-48cf-a602-c82ff20892ff/rabbitmq/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.691085 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_de7ea801-d184-48cf-a602-c82ff20892ff/setup-container/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.722083 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_44bcf219-3358-4596-9d1e-88a51c415266/rabbitmq/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.728859 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_44bcf219-3358-4596-9d1e-88a51c415266/setup-container/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.776675 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_35a19b99-eed0-4383-bea5-cf43d84a5a3e/rabbitmq/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.781642 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_35a19b99-eed0-4383-bea5-cf43d84a5a3e/setup-container/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.802641 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn_828bd055-053d-43b7-b76f-746438bb9b41/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.813389 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-vqzdk_dd495475-04cc-47b2-ad0e-7e3b83917ece/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.830865 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c_4a9e212c-bc4b-4dae-9c97-cbc48686c8fc/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.842280 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-7xfqr_af647318-40b6-4ce3-8f5b-c3af4c8dcb0d/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:59:21 crc 
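
Note: the files being parsed here are written by the container runtime (CRI-O on this node) in the CRI logging format, one record per line: an RFC3339Nano timestamp, the stream name (stdout or stderr), a tag (F for a full line, P for a partial line the runtime had to split), and the payload. A simplified reader assuming that four-field layout; the kubelet's own parser handles more edge cases, and the sample payload below is illustrative:

package main

import (
	"fmt"
	"strings"
	"time"
)

// criEntry models one line of a CRI-format container log:
// "<RFC3339Nano timestamp> <stdout|stderr> <F|P> <payload>".
type criEntry struct {
	When    time.Time
	Stream  string
	Partial bool // tag "P": the runtime split an over-long line
	Payload string
}

func parseCRILine(line string) (criEntry, error) {
	parts := strings.SplitN(line, " ", 4)
	if len(parts) != 4 {
		return criEntry{}, fmt.Errorf("malformed CRI log line: %q", line)
	}
	ts, err := time.Parse(time.RFC3339Nano, parts[0])
	if err != nil {
		return criEntry{}, err
	}
	return criEntry{
		When:    ts,
		Stream:  parts[1],
		Partial: parts[2] == "P",
		Payload: parts[3],
	}, nil
}

func main() {
	// Hypothetical sample line; real content lives under /var/log/pods.
	e, err := parseCRILine(`2026-01-21T12:59:21.857153186Z stderr F starting reload`)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s [%s] partial=%v: %s\n",
		e.When.Format(time.RFC3339Nano), e.Stream, e.Partial, e.Payload)
}
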
kubenswrapper[4881]: I0121 12:59:21.857153 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-dd2hk_157a809f-f6fa-43dc-b73d-380976da1312/ssh-known-hosts-edpm-deployment/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.091291 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7564f958f5-jmdx2_86a11f48-404e-4c5e-8ff4-5033a6411956/proxy-httpd/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.112948 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7564f958f5-jmdx2_86a11f48-404e-4c5e-8ff4-5033a6411956/proxy-server/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.130292 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-j29v8_27451133-57c8-4991-aae0-ec0a82432176/swift-ring-rebalance/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.176895 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eafb725b-4d8c-44b6-8966-4c611d4897d8/account-server/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.221606 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eafb725b-4d8c-44b6-8966-4c611d4897d8/account-replicator/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.230961 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eafb725b-4d8c-44b6-8966-4c611d4897d8/account-auditor/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.239379 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eafb725b-4d8c-44b6-8966-4c611d4897d8/account-reaper/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.249024 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eafb725b-4d8c-44b6-8966-4c611d4897d8/container-server/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.312024 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eafb725b-4d8c-44b6-8966-4c611d4897d8/container-replicator/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.329091 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eafb725b-4d8c-44b6-8966-4c611d4897d8/container-auditor/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.340310 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eafb725b-4d8c-44b6-8966-4c611d4897d8/container-updater/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.361927 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eafb725b-4d8c-44b6-8966-4c611d4897d8/object-server/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.399836 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eafb725b-4d8c-44b6-8966-4c611d4897d8/object-replicator/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.437321 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eafb725b-4d8c-44b6-8966-4c611d4897d8/object-auditor/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.454443 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eafb725b-4d8c-44b6-8966-4c611d4897d8/object-updater/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.466340 4881 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eafb725b-4d8c-44b6-8966-4c611d4897d8/object-expirer/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.472828 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eafb725b-4d8c-44b6-8966-4c611d4897d8/rsync/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.482274 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eafb725b-4d8c-44b6-8966-4c611d4897d8/swift-recon-cron/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.547212 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr_2f9f4763-a2f6-4558-82fa-be718012fc12/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.800394 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_b482979e-7a9e-4b89-846c-f50400adcf1b/tempest-tests-tempest-tests-runner/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.818644 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp_ec204ea7-b207-409b-8fa0-ff2847f7400a/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:59:23 crc kubenswrapper[4881]: I0121 12:59:23.462097 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_bf14e65c-4c95-4766-a2e2-57b040e9f192/watcher-api-log/0.log" Jan 21 12:59:28 crc kubenswrapper[4881]: I0121 12:59:28.189422 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_bf14e65c-4c95-4766-a2e2-57b040e9f192/watcher-api/0.log" Jan 21 12:59:28 crc kubenswrapper[4881]: I0121 12:59:28.386648 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_937bcc33-ee83-4f94-ab76-84f534cfd05a/watcher-applier/0.log" Jan 21 12:59:29 crc kubenswrapper[4881]: I0121 12:59:29.799410 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_1a227ee4-7a4c-4cb6-991c-d137119a2a6e/watcher-decision-engine/0.log" Jan 21 12:59:29 crc kubenswrapper[4881]: I0121 12:59:29.851571 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:59:29 crc kubenswrapper[4881]: I0121 12:59:29.851684 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:59:35 crc kubenswrapper[4881]: I0121 12:59:35.081796 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l_1c737afe-a2ad-4075-acd6-9f73aada0e4b/extract/0.log" Jan 21 12:59:35 crc kubenswrapper[4881]: I0121 12:59:35.092616 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l_1c737afe-a2ad-4075-acd6-9f73aada0e4b/util/0.log" Jan 21 12:59:35 crc kubenswrapper[4881]: I0121 
12:59:35.123899 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l_1c737afe-a2ad-4075-acd6-9f73aada0e4b/pull/0.log" Jan 21 12:59:35 crc kubenswrapper[4881]: I0121 12:59:35.224300 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-svq8w_848fd8db-3bd5-4e22-96ca-f69b181e48be/manager/0.log" Jan 21 12:59:35 crc kubenswrapper[4881]: I0121 12:59:35.289403 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-7qgck_a028dcae-6b9d-414d-8bab-652f301de541/manager/0.log" Jan 21 12:59:35 crc kubenswrapper[4881]: I0121 12:59:35.326909 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-4wmln_36e5ddfe-67a4-4721-9ef5-b9459c64bf5c/manager/0.log" Jan 21 12:59:35 crc kubenswrapper[4881]: I0121 12:59:35.408229 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-jv7cr_1f795f92-d385-49bc-bc91-5109734f4d5a/manager/0.log" Jan 21 12:59:35 crc kubenswrapper[4881]: I0121 12:59:35.418473 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-zmgll_efb259b7-934f-4bc3-b502-633472d1a1c5/manager/0.log" Jan 21 12:59:35 crc kubenswrapper[4881]: I0121 12:59:35.459504 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-bv8wz_bb9b2c3f-4f68-44fc-addf-2cf4421be015/manager/0.log" Jan 21 12:59:35 crc kubenswrapper[4881]: I0121 12:59:35.816212 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-klgq4_2fe210a4-2adf-4b55-9a43-c1c390f51b0e/manager/0.log" Jan 21 12:59:35 crc kubenswrapper[4881]: I0121 12:59:35.831835 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-5qcms_d0cafd1d-5f37-499a-a531-547a137aae21/manager/0.log" Jan 21 12:59:35 crc kubenswrapper[4881]: I0121 12:59:35.914997 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-9zp7h_ba9a1249-fc58-4809-a472-d199afa9b52b/manager/0.log" Jan 21 12:59:35 crc kubenswrapper[4881]: I0121 12:59:35.925050 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-h6dr4_b72b2323-5329-4145-9cee-b447d9e2a304/manager/0.log" Jan 21 12:59:35 crc kubenswrapper[4881]: I0121 12:59:35.969305 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-s6gm8_4c2550fe-b3eb-4eef-8ffc-ebb4a9ce1b5f/manager/0.log" Jan 21 12:59:36 crc kubenswrapper[4881]: I0121 12:59:36.021001 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-ncnww_c3b86204-5389-4b6a-bd45-fb6ee23b784e/manager/0.log" Jan 21 12:59:36 crc kubenswrapper[4881]: I0121 12:59:36.108803 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-798zt_761a1a49-e01e-4674-b1f4-da732e1def98/manager/0.log" Jan 21 12:59:36 crc kubenswrapper[4881]: I0121 12:59:36.124852 4881 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-n7kgd_340257c4-9218-49b0-8a75-b2a4e0231fe3/manager/0.log" Jan 21 12:59:36 crc kubenswrapper[4881]: I0121 12:59:36.147368 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8544795q_b1b17be2-e382-4916-8e53-a68c85b5bfc2/manager/0.log" Jan 21 12:59:36 crc kubenswrapper[4881]: I0121 12:59:36.304172 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-766b56994f-7hsc6_3a9a96af-4c4b-45b4-ade0-688a9029cf7b/operator/0.log" Jan 21 12:59:37 crc kubenswrapper[4881]: I0121 12:59:37.727936 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-87d6d564b-ktcf8_a55fdb43-cd6c-4415-8ef6-07f6c7da6272/manager/0.log" Jan 21 12:59:37 crc kubenswrapper[4881]: I0121 12:59:37.737416 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7vz4j_0a051fc2-b6e4-463c-bb0a-b565d12b21b4/registry-server/0.log" Jan 21 12:59:37 crc kubenswrapper[4881]: I0121 12:59:37.792085 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-vpqw4_50cfdf18-6a7e-4b3c-bb0f-5260fc3d42eb/manager/0.log" Jan 21 12:59:37 crc kubenswrapper[4881]: I0121 12:59:37.815370 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-jh4z9_e8e6f423-a07b-4a22-9e39-efa8de22747e/manager/0.log" Jan 21 12:59:37 crc kubenswrapper[4881]: I0121 12:59:37.849520 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-76qxc_8c8feeec-377c-499a-b666-895010f8ebeb/operator/0.log" Jan 21 12:59:37 crc kubenswrapper[4881]: I0121 12:59:37.891243 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-rk8l8_8c504afd-e4e0-4676-b292-b569b638a7dd/manager/0.log" Jan 21 12:59:38 crc kubenswrapper[4881]: I0121 12:59:38.055386 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-fcht4_55ce5ee6-47f4-4874-92dc-6ab78f2ce212/manager/0.log" Jan 21 12:59:38 crc kubenswrapper[4881]: I0121 12:59:38.072642 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-tttcz_2aac430e-3ac8-4624-8575-66386b5c2df3/manager/0.log" Jan 21 12:59:38 crc kubenswrapper[4881]: I0121 12:59:38.142581 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-849fd9b886-k9t7q_1cebbaaf-6189-409a-8f25-43d7fac77f95/manager/0.log" Jan 21 12:59:42 crc kubenswrapper[4881]: I0121 12:59:42.476213 4881 generic.go:334] "Generic (PLEG): container finished" podID="32ea9833-e257-4601-8be7-dcf0882d25ff" containerID="0818ec9313f2fc50a748108c2a7b4170d06db46eb9b811376ec620220e592ebc" exitCode=0 Jan 21 12:59:42 crc kubenswrapper[4881]: I0121 12:59:42.476313 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pd9dn/crc-debug-r7kk4" event={"ID":"32ea9833-e257-4601-8be7-dcf0882d25ff","Type":"ContainerDied","Data":"0818ec9313f2fc50a748108c2a7b4170d06db46eb9b811376ec620220e592ebc"} Jan 21 12:59:43 crc kubenswrapper[4881]: 
I0121 12:59:43.296040 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-hfc8p_bc38f0b5-944c-40ae-aed0-50ca39ea2627/control-plane-machine-set-operator/0.log" Jan 21 12:59:43 crc kubenswrapper[4881]: I0121 12:59:43.324261 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-cclnc_8465162e-dd9f-45b4-83a6-94666ac2b87b/kube-rbac-proxy/0.log" Jan 21 12:59:43 crc kubenswrapper[4881]: I0121 12:59:43.335061 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-cclnc_8465162e-dd9f-45b4-83a6-94666ac2b87b/machine-api-operator/0.log" Jan 21 12:59:43 crc kubenswrapper[4881]: I0121 12:59:43.621252 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pd9dn/crc-debug-r7kk4" Jan 21 12:59:43 crc kubenswrapper[4881]: I0121 12:59:43.663250 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pd9dn/crc-debug-r7kk4"] Jan 21 12:59:43 crc kubenswrapper[4881]: I0121 12:59:43.683619 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pd9dn/crc-debug-r7kk4"] Jan 21 12:59:43 crc kubenswrapper[4881]: I0121 12:59:43.705021 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32ea9833-e257-4601-8be7-dcf0882d25ff-host\") pod \"32ea9833-e257-4601-8be7-dcf0882d25ff\" (UID: \"32ea9833-e257-4601-8be7-dcf0882d25ff\") " Jan 21 12:59:43 crc kubenswrapper[4881]: I0121 12:59:43.705160 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfvbq\" (UniqueName: \"kubernetes.io/projected/32ea9833-e257-4601-8be7-dcf0882d25ff-kube-api-access-hfvbq\") pod \"32ea9833-e257-4601-8be7-dcf0882d25ff\" (UID: \"32ea9833-e257-4601-8be7-dcf0882d25ff\") " Jan 21 12:59:43 crc kubenswrapper[4881]: I0121 12:59:43.705289 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32ea9833-e257-4601-8be7-dcf0882d25ff-host" (OuterVolumeSpecName: "host") pod "32ea9833-e257-4601-8be7-dcf0882d25ff" (UID: "32ea9833-e257-4601-8be7-dcf0882d25ff"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 12:59:43 crc kubenswrapper[4881]: I0121 12:59:43.705995 4881 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32ea9833-e257-4601-8be7-dcf0882d25ff-host\") on node \"crc\" DevicePath \"\"" Jan 21 12:59:43 crc kubenswrapper[4881]: I0121 12:59:43.714528 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32ea9833-e257-4601-8be7-dcf0882d25ff-kube-api-access-hfvbq" (OuterVolumeSpecName: "kube-api-access-hfvbq") pod "32ea9833-e257-4601-8be7-dcf0882d25ff" (UID: "32ea9833-e257-4601-8be7-dcf0882d25ff"). InnerVolumeSpecName "kube-api-access-hfvbq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:59:43 crc kubenswrapper[4881]: I0121 12:59:43.808138 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfvbq\" (UniqueName: \"kubernetes.io/projected/32ea9833-e257-4601-8be7-dcf0882d25ff-kube-api-access-hfvbq\") on node \"crc\" DevicePath \"\"" Jan 21 12:59:44 crc kubenswrapper[4881]: I0121 12:59:44.495392 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5015ec3924ce05a87e752db51205b1697e3330ac046050ee395aa7729f42795a" Jan 21 12:59:44 crc kubenswrapper[4881]: I0121 12:59:44.495399 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pd9dn/crc-debug-r7kk4" Jan 21 12:59:44 crc kubenswrapper[4881]: I0121 12:59:44.941583 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pd9dn/crc-debug-tvg8c"] Jan 21 12:59:44 crc kubenswrapper[4881]: E0121 12:59:44.942134 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ea9833-e257-4601-8be7-dcf0882d25ff" containerName="container-00" Jan 21 12:59:44 crc kubenswrapper[4881]: I0121 12:59:44.942147 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ea9833-e257-4601-8be7-dcf0882d25ff" containerName="container-00" Jan 21 12:59:44 crc kubenswrapper[4881]: I0121 12:59:44.950187 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="32ea9833-e257-4601-8be7-dcf0882d25ff" containerName="container-00" Jan 21 12:59:44 crc kubenswrapper[4881]: I0121 12:59:44.951826 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pd9dn/crc-debug-tvg8c" Jan 21 12:59:45 crc kubenswrapper[4881]: I0121 12:59:45.029981 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg9cv\" (UniqueName: \"kubernetes.io/projected/617c663a-e61a-41e8-92f1-a847b84c7b5b-kube-api-access-kg9cv\") pod \"crc-debug-tvg8c\" (UID: \"617c663a-e61a-41e8-92f1-a847b84c7b5b\") " pod="openshift-must-gather-pd9dn/crc-debug-tvg8c" Jan 21 12:59:45 crc kubenswrapper[4881]: I0121 12:59:45.030078 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/617c663a-e61a-41e8-92f1-a847b84c7b5b-host\") pod \"crc-debug-tvg8c\" (UID: \"617c663a-e61a-41e8-92f1-a847b84c7b5b\") " pod="openshift-must-gather-pd9dn/crc-debug-tvg8c" Jan 21 12:59:45 crc kubenswrapper[4881]: I0121 12:59:45.132346 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/617c663a-e61a-41e8-92f1-a847b84c7b5b-host\") pod \"crc-debug-tvg8c\" (UID: \"617c663a-e61a-41e8-92f1-a847b84c7b5b\") " pod="openshift-must-gather-pd9dn/crc-debug-tvg8c" Jan 21 12:59:45 crc kubenswrapper[4881]: I0121 12:59:45.132583 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/617c663a-e61a-41e8-92f1-a847b84c7b5b-host\") pod \"crc-debug-tvg8c\" (UID: \"617c663a-e61a-41e8-92f1-a847b84c7b5b\") " pod="openshift-must-gather-pd9dn/crc-debug-tvg8c" Jan 21 12:59:45 crc kubenswrapper[4881]: I0121 12:59:45.132642 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg9cv\" (UniqueName: \"kubernetes.io/projected/617c663a-e61a-41e8-92f1-a847b84c7b5b-kube-api-access-kg9cv\") pod \"crc-debug-tvg8c\" (UID: \"617c663a-e61a-41e8-92f1-a847b84c7b5b\") " 
pod="openshift-must-gather-pd9dn/crc-debug-tvg8c" Jan 21 12:59:45 crc kubenswrapper[4881]: I0121 12:59:45.160454 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg9cv\" (UniqueName: \"kubernetes.io/projected/617c663a-e61a-41e8-92f1-a847b84c7b5b-kube-api-access-kg9cv\") pod \"crc-debug-tvg8c\" (UID: \"617c663a-e61a-41e8-92f1-a847b84c7b5b\") " pod="openshift-must-gather-pd9dn/crc-debug-tvg8c" Jan 21 12:59:45 crc kubenswrapper[4881]: I0121 12:59:45.284370 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pd9dn/crc-debug-tvg8c" Jan 21 12:59:45 crc kubenswrapper[4881]: I0121 12:59:45.336820 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32ea9833-e257-4601-8be7-dcf0882d25ff" path="/var/lib/kubelet/pods/32ea9833-e257-4601-8be7-dcf0882d25ff/volumes" Jan 21 12:59:45 crc kubenswrapper[4881]: W0121 12:59:45.351741 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod617c663a_e61a_41e8_92f1_a847b84c7b5b.slice/crio-afb63b538bb9c15b601a888881bf38b207d75c919b5799ce399b386d20730cc3 WatchSource:0}: Error finding container afb63b538bb9c15b601a888881bf38b207d75c919b5799ce399b386d20730cc3: Status 404 returned error can't find the container with id afb63b538bb9c15b601a888881bf38b207d75c919b5799ce399b386d20730cc3 Jan 21 12:59:45 crc kubenswrapper[4881]: I0121 12:59:45.504436 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pd9dn/crc-debug-tvg8c" event={"ID":"617c663a-e61a-41e8-92f1-a847b84c7b5b","Type":"ContainerStarted","Data":"afb63b538bb9c15b601a888881bf38b207d75c919b5799ce399b386d20730cc3"} Jan 21 12:59:46 crc kubenswrapper[4881]: I0121 12:59:46.514973 4881 generic.go:334] "Generic (PLEG): container finished" podID="617c663a-e61a-41e8-92f1-a847b84c7b5b" containerID="adc0b5280c47db093a6ec180a9e5726fbeb5b4a901615e6f06978e816e37c4a2" exitCode=0 Jan 21 12:59:46 crc kubenswrapper[4881]: I0121 12:59:46.515183 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pd9dn/crc-debug-tvg8c" event={"ID":"617c663a-e61a-41e8-92f1-a847b84c7b5b","Type":"ContainerDied","Data":"adc0b5280c47db093a6ec180a9e5726fbeb5b4a901615e6f06978e816e37c4a2"} Jan 21 12:59:47 crc kubenswrapper[4881]: I0121 12:59:47.679966 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pd9dn/crc-debug-tvg8c" Jan 21 12:59:47 crc kubenswrapper[4881]: I0121 12:59:47.790011 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/617c663a-e61a-41e8-92f1-a847b84c7b5b-host\") pod \"617c663a-e61a-41e8-92f1-a847b84c7b5b\" (UID: \"617c663a-e61a-41e8-92f1-a847b84c7b5b\") " Jan 21 12:59:47 crc kubenswrapper[4881]: I0121 12:59:47.790099 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/617c663a-e61a-41e8-92f1-a847b84c7b5b-host" (OuterVolumeSpecName: "host") pod "617c663a-e61a-41e8-92f1-a847b84c7b5b" (UID: "617c663a-e61a-41e8-92f1-a847b84c7b5b"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 12:59:47 crc kubenswrapper[4881]: I0121 12:59:47.790195 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kg9cv\" (UniqueName: \"kubernetes.io/projected/617c663a-e61a-41e8-92f1-a847b84c7b5b-kube-api-access-kg9cv\") pod \"617c663a-e61a-41e8-92f1-a847b84c7b5b\" (UID: \"617c663a-e61a-41e8-92f1-a847b84c7b5b\") " Jan 21 12:59:47 crc kubenswrapper[4881]: I0121 12:59:47.790712 4881 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/617c663a-e61a-41e8-92f1-a847b84c7b5b-host\") on node \"crc\" DevicePath \"\"" Jan 21 12:59:47 crc kubenswrapper[4881]: I0121 12:59:47.804990 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/617c663a-e61a-41e8-92f1-a847b84c7b5b-kube-api-access-kg9cv" (OuterVolumeSpecName: "kube-api-access-kg9cv") pod "617c663a-e61a-41e8-92f1-a847b84c7b5b" (UID: "617c663a-e61a-41e8-92f1-a847b84c7b5b"). InnerVolumeSpecName "kube-api-access-kg9cv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:59:47 crc kubenswrapper[4881]: I0121 12:59:47.892519 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kg9cv\" (UniqueName: \"kubernetes.io/projected/617c663a-e61a-41e8-92f1-a847b84c7b5b-kube-api-access-kg9cv\") on node \"crc\" DevicePath \"\"" Jan 21 12:59:48 crc kubenswrapper[4881]: I0121 12:59:48.534020 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pd9dn/crc-debug-tvg8c" event={"ID":"617c663a-e61a-41e8-92f1-a847b84c7b5b","Type":"ContainerDied","Data":"afb63b538bb9c15b601a888881bf38b207d75c919b5799ce399b386d20730cc3"} Jan 21 12:59:48 crc kubenswrapper[4881]: I0121 12:59:48.534075 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afb63b538bb9c15b601a888881bf38b207d75c919b5799ce399b386d20730cc3" Jan 21 12:59:48 crc kubenswrapper[4881]: I0121 12:59:48.534102 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pd9dn/crc-debug-tvg8c" Jan 21 12:59:48 crc kubenswrapper[4881]: I0121 12:59:48.860756 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pd9dn/crc-debug-tvg8c"] Jan 21 12:59:48 crc kubenswrapper[4881]: I0121 12:59:48.870333 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pd9dn/crc-debug-tvg8c"] Jan 21 12:59:49 crc kubenswrapper[4881]: I0121 12:59:49.329937 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="617c663a-e61a-41e8-92f1-a847b84c7b5b" path="/var/lib/kubelet/pods/617c663a-e61a-41e8-92f1-a847b84c7b5b/volumes" Jan 21 12:59:50 crc kubenswrapper[4881]: I0121 12:59:50.071002 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pd9dn/crc-debug-56wrj"] Jan 21 12:59:50 crc kubenswrapper[4881]: E0121 12:59:50.072484 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="617c663a-e61a-41e8-92f1-a847b84c7b5b" containerName="container-00" Jan 21 12:59:50 crc kubenswrapper[4881]: I0121 12:59:50.072507 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="617c663a-e61a-41e8-92f1-a847b84c7b5b" containerName="container-00" Jan 21 12:59:50 crc kubenswrapper[4881]: I0121 12:59:50.072738 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="617c663a-e61a-41e8-92f1-a847b84c7b5b" containerName="container-00" Jan 21 12:59:50 crc kubenswrapper[4881]: I0121 12:59:50.073540 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pd9dn/crc-debug-56wrj" Jan 21 12:59:50 crc kubenswrapper[4881]: I0121 12:59:50.142631 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ecc6c59e-4b85-45d4-a592-46e269e622ee-host\") pod \"crc-debug-56wrj\" (UID: \"ecc6c59e-4b85-45d4-a592-46e269e622ee\") " pod="openshift-must-gather-pd9dn/crc-debug-56wrj" Jan 21 12:59:50 crc kubenswrapper[4881]: I0121 12:59:50.142735 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5ttc\" (UniqueName: \"kubernetes.io/projected/ecc6c59e-4b85-45d4-a592-46e269e622ee-kube-api-access-p5ttc\") pod \"crc-debug-56wrj\" (UID: \"ecc6c59e-4b85-45d4-a592-46e269e622ee\") " pod="openshift-must-gather-pd9dn/crc-debug-56wrj" Jan 21 12:59:50 crc kubenswrapper[4881]: I0121 12:59:50.245106 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ecc6c59e-4b85-45d4-a592-46e269e622ee-host\") pod \"crc-debug-56wrj\" (UID: \"ecc6c59e-4b85-45d4-a592-46e269e622ee\") " pod="openshift-must-gather-pd9dn/crc-debug-56wrj" Jan 21 12:59:50 crc kubenswrapper[4881]: I0121 12:59:50.245188 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5ttc\" (UniqueName: \"kubernetes.io/projected/ecc6c59e-4b85-45d4-a592-46e269e622ee-kube-api-access-p5ttc\") pod \"crc-debug-56wrj\" (UID: \"ecc6c59e-4b85-45d4-a592-46e269e622ee\") " pod="openshift-must-gather-pd9dn/crc-debug-56wrj" Jan 21 12:59:50 crc kubenswrapper[4881]: I0121 12:59:50.245208 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ecc6c59e-4b85-45d4-a592-46e269e622ee-host\") pod \"crc-debug-56wrj\" (UID: \"ecc6c59e-4b85-45d4-a592-46e269e622ee\") " pod="openshift-must-gather-pd9dn/crc-debug-56wrj" Jan 21 12:59:50 crc kubenswrapper[4881]: 
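
Note: each crc-debug pod mounts exactly two volumes, a hostPath volume named "host" (the node's root filesystem) and an API-server-injected projected token volume named kube-api-access-<suffix>, which is why every mount/unmount pair in these entries involves only those two names. A sketch of the pod shape this implies; the image and command are placeholders, and the projected token volume is added automatically on admission, so it is not declared:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	hostPathType := corev1.HostPathDirectory
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "crc-debug-", // suffixes like -56wrj are generated
			Namespace:    "openshift-must-gather-pd9dn",
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "host",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/", Type: &hostPathType},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "container-00", // matches the containerName in the entries above
				Image:        "registry.example/tools:latest", // placeholder
				Command:      []string{"chroot", "/host"},
				VolumeMounts: []corev1.VolumeMount{{Name: "host", MountPath: "/host"}},
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
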
I0121 12:59:50.271999 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5ttc\" (UniqueName: \"kubernetes.io/projected/ecc6c59e-4b85-45d4-a592-46e269e622ee-kube-api-access-p5ttc\") pod \"crc-debug-56wrj\" (UID: \"ecc6c59e-4b85-45d4-a592-46e269e622ee\") " pod="openshift-must-gather-pd9dn/crc-debug-56wrj" Jan 21 12:59:50 crc kubenswrapper[4881]: I0121 12:59:50.395996 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pd9dn/crc-debug-56wrj" Jan 21 12:59:50 crc kubenswrapper[4881]: W0121 12:59:50.428607 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecc6c59e_4b85_45d4_a592_46e269e622ee.slice/crio-49b0e7d6ab3de89535e08864f3dc88a4d76792539d3db9ddb7ab991ef1e1229d WatchSource:0}: Error finding container 49b0e7d6ab3de89535e08864f3dc88a4d76792539d3db9ddb7ab991ef1e1229d: Status 404 returned error can't find the container with id 49b0e7d6ab3de89535e08864f3dc88a4d76792539d3db9ddb7ab991ef1e1229d Jan 21 12:59:50 crc kubenswrapper[4881]: I0121 12:59:50.551323 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pd9dn/crc-debug-56wrj" event={"ID":"ecc6c59e-4b85-45d4-a592-46e269e622ee","Type":"ContainerStarted","Data":"49b0e7d6ab3de89535e08864f3dc88a4d76792539d3db9ddb7ab991ef1e1229d"} Jan 21 12:59:51 crc kubenswrapper[4881]: I0121 12:59:51.844596 4881 generic.go:334] "Generic (PLEG): container finished" podID="ecc6c59e-4b85-45d4-a592-46e269e622ee" containerID="d7393ff190dc0d36007c0eef8e475ccef4c110168796bf46e5bdb722b58eff4e" exitCode=0 Jan 21 12:59:51 crc kubenswrapper[4881]: I0121 12:59:51.844810 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pd9dn/crc-debug-56wrj" event={"ID":"ecc6c59e-4b85-45d4-a592-46e269e622ee","Type":"ContainerDied","Data":"d7393ff190dc0d36007c0eef8e475ccef4c110168796bf46e5bdb722b58eff4e"} Jan 21 12:59:51 crc kubenswrapper[4881]: I0121 12:59:51.901971 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pd9dn/crc-debug-56wrj"] Jan 21 12:59:51 crc kubenswrapper[4881]: I0121 12:59:51.911590 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pd9dn/crc-debug-56wrj"] Jan 21 12:59:52 crc kubenswrapper[4881]: I0121 12:59:52.482014 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-h2ttp_faf7e95d-07e7-4d3d-936b-26b187fc0b0c/cert-manager-controller/0.log" Jan 21 12:59:52 crc kubenswrapper[4881]: I0121 12:59:52.506507 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-cdm4s_1d8014cf-8827-449d-b5fa-d0c098cc377e/cert-manager-cainjector/0.log" Jan 21 12:59:52 crc kubenswrapper[4881]: I0121 12:59:52.517010 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-csqtv_2aeab03b-23ac-4cc2-8f0f-db1111ef2cc4/cert-manager-webhook/0.log" Jan 21 12:59:52 crc kubenswrapper[4881]: I0121 12:59:52.981906 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pd9dn/crc-debug-56wrj" Jan 21 12:59:53 crc kubenswrapper[4881]: I0121 12:59:53.168340 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ecc6c59e-4b85-45d4-a592-46e269e622ee-host\") pod \"ecc6c59e-4b85-45d4-a592-46e269e622ee\" (UID: \"ecc6c59e-4b85-45d4-a592-46e269e622ee\") " Jan 21 12:59:53 crc kubenswrapper[4881]: I0121 12:59:53.168415 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5ttc\" (UniqueName: \"kubernetes.io/projected/ecc6c59e-4b85-45d4-a592-46e269e622ee-kube-api-access-p5ttc\") pod \"ecc6c59e-4b85-45d4-a592-46e269e622ee\" (UID: \"ecc6c59e-4b85-45d4-a592-46e269e622ee\") " Jan 21 12:59:53 crc kubenswrapper[4881]: I0121 12:59:53.168478 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecc6c59e-4b85-45d4-a592-46e269e622ee-host" (OuterVolumeSpecName: "host") pod "ecc6c59e-4b85-45d4-a592-46e269e622ee" (UID: "ecc6c59e-4b85-45d4-a592-46e269e622ee"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 12:59:53 crc kubenswrapper[4881]: I0121 12:59:53.169141 4881 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ecc6c59e-4b85-45d4-a592-46e269e622ee-host\") on node \"crc\" DevicePath \"\"" Jan 21 12:59:53 crc kubenswrapper[4881]: I0121 12:59:53.182050 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecc6c59e-4b85-45d4-a592-46e269e622ee-kube-api-access-p5ttc" (OuterVolumeSpecName: "kube-api-access-p5ttc") pod "ecc6c59e-4b85-45d4-a592-46e269e622ee" (UID: "ecc6c59e-4b85-45d4-a592-46e269e622ee"). InnerVolumeSpecName "kube-api-access-p5ttc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:59:53 crc kubenswrapper[4881]: I0121 12:59:53.271035 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5ttc\" (UniqueName: \"kubernetes.io/projected/ecc6c59e-4b85-45d4-a592-46e269e622ee-kube-api-access-p5ttc\") on node \"crc\" DevicePath \"\"" Jan 21 12:59:53 crc kubenswrapper[4881]: I0121 12:59:53.323919 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecc6c59e-4b85-45d4-a592-46e269e622ee" path="/var/lib/kubelet/pods/ecc6c59e-4b85-45d4-a592-46e269e622ee/volumes" Jan 21 12:59:53 crc kubenswrapper[4881]: I0121 12:59:53.872432 4881 scope.go:117] "RemoveContainer" containerID="d7393ff190dc0d36007c0eef8e475ccef4c110168796bf46e5bdb722b58eff4e" Jan 21 12:59:53 crc kubenswrapper[4881]: I0121 12:59:53.872484 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pd9dn/crc-debug-56wrj" Jan 21 12:59:58 crc kubenswrapper[4881]: I0121 12:59:58.526060 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-lgdjc_fcdadd73-568f-4ae0-a7bb-9330b2feb835/nmstate-console-plugin/0.log" Jan 21 12:59:58 crc kubenswrapper[4881]: I0121 12:59:58.580150 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-b9rcw_5c705c83-efa0-436f-a0b5-9164dbb6b1df/nmstate-handler/0.log" Jan 21 12:59:58 crc kubenswrapper[4881]: I0121 12:59:58.589894 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-ft48b_f68408aa-3450-42af-a6f8-b5260973f272/nmstate-metrics/0.log" Jan 21 12:59:58 crc kubenswrapper[4881]: I0121 12:59:58.599368 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-ft48b_f68408aa-3450-42af-a6f8-b5260973f272/kube-rbac-proxy/0.log" Jan 21 12:59:58 crc kubenswrapper[4881]: I0121 12:59:58.612795 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-zlxs9_14878b0e-37cc-4c03-89df-ba23d94589a0/nmstate-operator/0.log" Jan 21 12:59:58 crc kubenswrapper[4881]: I0121 12:59:58.647729 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-qmv5k_b6262b8c-2531-4008-9bb8-c3beeb66a3ed/nmstate-webhook/0.log" Jan 21 12:59:59 crc kubenswrapper[4881]: I0121 12:59:59.850639 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:59:59 crc kubenswrapper[4881]: I0121 12:59:59.851211 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.187923 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4"] Jan 21 13:00:00 crc kubenswrapper[4881]: E0121 13:00:00.188457 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecc6c59e-4b85-45d4-a592-46e269e622ee" containerName="container-00" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.188477 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecc6c59e-4b85-45d4-a592-46e269e622ee" containerName="container-00" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.188699 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecc6c59e-4b85-45d4-a592-46e269e622ee" containerName="container-00" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.189567 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.195259 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.200302 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.201357 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4"] Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.345003 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z78wd\" (UniqueName: \"kubernetes.io/projected/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-kube-api-access-z78wd\") pod \"collect-profiles-29483340-9mvx4\" (UID: \"a3d03c94-fe93-4321-a2a8-44fc4e42cecf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.345264 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-secret-volume\") pod \"collect-profiles-29483340-9mvx4\" (UID: \"a3d03c94-fe93-4321-a2a8-44fc4e42cecf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.345555 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-config-volume\") pod \"collect-profiles-29483340-9mvx4\" (UID: \"a3d03c94-fe93-4321-a2a8-44fc4e42cecf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.447661 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z78wd\" (UniqueName: \"kubernetes.io/projected/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-kube-api-access-z78wd\") pod \"collect-profiles-29483340-9mvx4\" (UID: \"a3d03c94-fe93-4321-a2a8-44fc4e42cecf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.447779 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-secret-volume\") pod \"collect-profiles-29483340-9mvx4\" (UID: \"a3d03c94-fe93-4321-a2a8-44fc4e42cecf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.447852 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-config-volume\") pod \"collect-profiles-29483340-9mvx4\" (UID: \"a3d03c94-fe93-4321-a2a8-44fc4e42cecf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.448945 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-config-volume\") pod 
\"collect-profiles-29483340-9mvx4\" (UID: \"a3d03c94-fe93-4321-a2a8-44fc4e42cecf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.453817 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-secret-volume\") pod \"collect-profiles-29483340-9mvx4\" (UID: \"a3d03c94-fe93-4321-a2a8-44fc4e42cecf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.467507 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z78wd\" (UniqueName: \"kubernetes.io/projected/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-kube-api-access-z78wd\") pod \"collect-profiles-29483340-9mvx4\" (UID: \"a3d03c94-fe93-4321-a2a8-44fc4e42cecf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.519750 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" Jan 21 13:00:01 crc kubenswrapper[4881]: I0121 13:00:01.102190 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4"] Jan 21 13:00:01 crc kubenswrapper[4881]: E0121 13:00:01.945450 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3d03c94_fe93_4321_a2a8_44fc4e42cecf.slice/crio-c991ea82acb208ee5146cd2f274afea24486b30d08f10d3df4a9a9be6e57a12c.scope\": RecentStats: unable to find data in memory cache]" Jan 21 13:00:01 crc kubenswrapper[4881]: I0121 13:00:01.951943 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" event={"ID":"a3d03c94-fe93-4321-a2a8-44fc4e42cecf","Type":"ContainerStarted","Data":"c991ea82acb208ee5146cd2f274afea24486b30d08f10d3df4a9a9be6e57a12c"} Jan 21 13:00:01 crc kubenswrapper[4881]: I0121 13:00:01.951987 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" event={"ID":"a3d03c94-fe93-4321-a2a8-44fc4e42cecf","Type":"ContainerStarted","Data":"ca539054649ad7498aa368328f6ff7d3f04b6d41dd101ce5698d9930259deeae"} Jan 21 13:00:01 crc kubenswrapper[4881]: I0121 13:00:01.979410 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" podStartSLOduration=1.9793712289999998 podStartE2EDuration="1.979371229s" podCreationTimestamp="2026-01-21 13:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:00:01.973671361 +0000 UTC m=+7389.233627830" watchObservedRunningTime="2026-01-21 13:00:01.979371229 +0000 UTC m=+7389.239327698" Jan 21 13:00:02 crc kubenswrapper[4881]: I0121 13:00:02.965671 4881 generic.go:334] "Generic (PLEG): container finished" podID="a3d03c94-fe93-4321-a2a8-44fc4e42cecf" containerID="c991ea82acb208ee5146cd2f274afea24486b30d08f10d3df4a9a9be6e57a12c" exitCode=0 Jan 21 13:00:02 crc kubenswrapper[4881]: I0121 13:00:02.965754 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" event={"ID":"a3d03c94-fe93-4321-a2a8-44fc4e42cecf","Type":"ContainerDied","Data":"c991ea82acb208ee5146cd2f274afea24486b30d08f10d3df4a9a9be6e57a12c"} Jan 21 13:00:04 crc kubenswrapper[4881]: I0121 13:00:04.427474 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" Jan 21 13:00:04 crc kubenswrapper[4881]: I0121 13:00:04.469912 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-secret-volume\") pod \"a3d03c94-fe93-4321-a2a8-44fc4e42cecf\" (UID: \"a3d03c94-fe93-4321-a2a8-44fc4e42cecf\") " Jan 21 13:00:04 crc kubenswrapper[4881]: I0121 13:00:04.471698 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-config-volume\") pod \"a3d03c94-fe93-4321-a2a8-44fc4e42cecf\" (UID: \"a3d03c94-fe93-4321-a2a8-44fc4e42cecf\") " Jan 21 13:00:04 crc kubenswrapper[4881]: I0121 13:00:04.473751 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z78wd\" (UniqueName: \"kubernetes.io/projected/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-kube-api-access-z78wd\") pod \"a3d03c94-fe93-4321-a2a8-44fc4e42cecf\" (UID: \"a3d03c94-fe93-4321-a2a8-44fc4e42cecf\") " Jan 21 13:00:04 crc kubenswrapper[4881]: I0121 13:00:04.475237 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-config-volume" (OuterVolumeSpecName: "config-volume") pod "a3d03c94-fe93-4321-a2a8-44fc4e42cecf" (UID: "a3d03c94-fe93-4321-a2a8-44fc4e42cecf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:00:04 crc kubenswrapper[4881]: I0121 13:00:04.479761 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-kube-api-access-z78wd" (OuterVolumeSpecName: "kube-api-access-z78wd") pod "a3d03c94-fe93-4321-a2a8-44fc4e42cecf" (UID: "a3d03c94-fe93-4321-a2a8-44fc4e42cecf"). InnerVolumeSpecName "kube-api-access-z78wd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:00:04 crc kubenswrapper[4881]: I0121 13:00:04.497883 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a3d03c94-fe93-4321-a2a8-44fc4e42cecf" (UID: "a3d03c94-fe93-4321-a2a8-44fc4e42cecf"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:00:04 crc kubenswrapper[4881]: I0121 13:00:04.577366 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:00:04 crc kubenswrapper[4881]: I0121 13:00:04.577842 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:00:04 crc kubenswrapper[4881]: I0121 13:00:04.577856 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z78wd\" (UniqueName: \"kubernetes.io/projected/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-kube-api-access-z78wd\") on node \"crc\" DevicePath \"\"" Jan 21 13:00:04 crc kubenswrapper[4881]: I0121 13:00:04.986988 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" event={"ID":"a3d03c94-fe93-4321-a2a8-44fc4e42cecf","Type":"ContainerDied","Data":"ca539054649ad7498aa368328f6ff7d3f04b6d41dd101ce5698d9930259deeae"} Jan 21 13:00:04 crc kubenswrapper[4881]: I0121 13:00:04.987043 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca539054649ad7498aa368328f6ff7d3f04b6d41dd101ce5698d9930259deeae" Jan 21 13:00:04 crc kubenswrapper[4881]: I0121 13:00:04.987067 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" Jan 21 13:00:05 crc kubenswrapper[4881]: I0121 13:00:05.037428 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-rp92p_999c36a2-9f08-4da1-b14a-859ac888ae38/prometheus-operator/0.log" Jan 21 13:00:05 crc kubenswrapper[4881]: I0121 13:00:05.077932 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c"] Jan 21 13:00:05 crc kubenswrapper[4881]: I0121 13:00:05.085600 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-75db897d97-h5vzg_c2181303-fd96-43e5-b6f2-158cca65c0b4/prometheus-operator-admission-webhook/0.log" Jan 21 13:00:05 crc kubenswrapper[4881]: I0121 13:00:05.097238 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c"] Jan 21 13:00:05 crc kubenswrapper[4881]: I0121 13:00:05.097911 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-75db897d97-n5xvb_952218f5-7dfc-40d5-a1df-2c462e1e4dcc/prometheus-operator-admission-webhook/0.log" Jan 21 13:00:05 crc kubenswrapper[4881]: I0121 13:00:05.142653 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-tfzsc_19be64a6-6795-4219-8d58-47f744ef8e13/operator/0.log" Jan 21 13:00:05 crc kubenswrapper[4881]: I0121 13:00:05.154435 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-6srxm_1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50/perses-operator/0.log" Jan 21 13:00:05 crc kubenswrapper[4881]: I0121 13:00:05.402985 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22846423-24bd-4d85-b2da-a5c75401cd25" 
path="/var/lib/kubelet/pods/22846423-24bd-4d85-b2da-a5c75401cd25/volumes" Jan 21 13:00:11 crc kubenswrapper[4881]: I0121 13:00:11.300822 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-dmwlt_c4a109b4-26ee-4a46-9333-989cf87c0ff7/controller/0.log" Jan 21 13:00:11 crc kubenswrapper[4881]: I0121 13:00:11.308567 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-dmwlt_c4a109b4-26ee-4a46-9333-989cf87c0ff7/kube-rbac-proxy/0.log" Jan 21 13:00:11 crc kubenswrapper[4881]: I0121 13:00:11.334656 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/controller/0.log" Jan 21 13:00:13 crc kubenswrapper[4881]: I0121 13:00:13.200755 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/frr/0.log" Jan 21 13:00:13 crc kubenswrapper[4881]: I0121 13:00:13.212322 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/reloader/0.log" Jan 21 13:00:13 crc kubenswrapper[4881]: I0121 13:00:13.217191 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/frr-metrics/0.log" Jan 21 13:00:13 crc kubenswrapper[4881]: I0121 13:00:13.223303 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/kube-rbac-proxy/0.log" Jan 21 13:00:13 crc kubenswrapper[4881]: I0121 13:00:13.232182 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/kube-rbac-proxy-frr/0.log" Jan 21 13:00:13 crc kubenswrapper[4881]: I0121 13:00:13.238758 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/cp-frr-files/0.log" Jan 21 13:00:13 crc kubenswrapper[4881]: I0121 13:00:13.245640 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/cp-reloader/0.log" Jan 21 13:00:13 crc kubenswrapper[4881]: I0121 13:00:13.252355 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/cp-metrics/0.log" Jan 21 13:00:13 crc kubenswrapper[4881]: I0121 13:00:13.267798 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-tzxpk_eaaea696-21d8-4963-8364-82fa7bbb0e19/frr-k8s-webhook-server/0.log" Jan 21 13:00:13 crc kubenswrapper[4881]: I0121 13:00:13.292727 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-58bd8f8bd-8k4c9_769e47b6-bd47-489d-9b99-4f2f0e30c4fd/manager/0.log" Jan 21 13:00:13 crc kubenswrapper[4881]: I0121 13:00:13.301419 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5cd4664cfc-6lg4r_a194c95e-cbcb-4d7e-a631-d4a14989e985/webhook-server/0.log" Jan 21 13:00:13 crc kubenswrapper[4881]: I0121 13:00:13.699063 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-697j4_f265a6e2-ea90-45ea-89c0-178d25243784/speaker/0.log" Jan 21 13:00:13 crc kubenswrapper[4881]: I0121 13:00:13.709972 4881 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-697j4_f265a6e2-ea90-45ea-89c0-178d25243784/kube-rbac-proxy/0.log" Jan 21 13:00:17 crc kubenswrapper[4881]: I0121 13:00:17.492673 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6_5c9dc897-764d-4f6c-ade8-99d7aa2d8d60/extract/0.log" Jan 21 13:00:17 crc kubenswrapper[4881]: I0121 13:00:17.503120 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6_5c9dc897-764d-4f6c-ade8-99d7aa2d8d60/util/0.log" Jan 21 13:00:17 crc kubenswrapper[4881]: I0121 13:00:17.513305 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6_5c9dc897-764d-4f6c-ade8-99d7aa2d8d60/pull/0.log" Jan 21 13:00:17 crc kubenswrapper[4881]: I0121 13:00:17.524558 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq_1bb22c78-c1fd-422e-900a-52c4b73fb451/extract/0.log" Jan 21 13:00:17 crc kubenswrapper[4881]: I0121 13:00:17.536747 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq_1bb22c78-c1fd-422e-900a-52c4b73fb451/util/0.log" Jan 21 13:00:17 crc kubenswrapper[4881]: I0121 13:00:17.559756 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq_1bb22c78-c1fd-422e-900a-52c4b73fb451/pull/0.log" Jan 21 13:00:17 crc kubenswrapper[4881]: I0121 13:00:17.577546 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x_31ed4736-a43c-4891-aeb4-e09d573a30b3/extract/0.log" Jan 21 13:00:17 crc kubenswrapper[4881]: I0121 13:00:17.588616 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x_31ed4736-a43c-4891-aeb4-e09d573a30b3/util/0.log" Jan 21 13:00:17 crc kubenswrapper[4881]: I0121 13:00:17.596320 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x_31ed4736-a43c-4891-aeb4-e09d573a30b3/pull/0.log" Jan 21 13:00:18 crc kubenswrapper[4881]: I0121 13:00:18.805458 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7wxr8_6e9defc7-ad37-4742-b149-cb71d7ea177a/registry-server/0.log" Jan 21 13:00:18 crc kubenswrapper[4881]: I0121 13:00:18.812535 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7wxr8_6e9defc7-ad37-4742-b149-cb71d7ea177a/extract-utilities/0.log" Jan 21 13:00:18 crc kubenswrapper[4881]: I0121 13:00:18.819172 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7wxr8_6e9defc7-ad37-4742-b149-cb71d7ea177a/extract-content/0.log" Jan 21 13:00:19 crc kubenswrapper[4881]: I0121 13:00:19.956600 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bn24k_cb2faf64-08ef-4413-84f0-10e88dcb7a8f/registry-server/0.log" Jan 21 13:00:19 crc kubenswrapper[4881]: I0121 13:00:19.962613 4881 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-bn24k_cb2faf64-08ef-4413-84f0-10e88dcb7a8f/extract-utilities/0.log" Jan 21 13:00:19 crc kubenswrapper[4881]: I0121 13:00:19.969247 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bn24k_cb2faf64-08ef-4413-84f0-10e88dcb7a8f/extract-content/0.log" Jan 21 13:00:19 crc kubenswrapper[4881]: I0121 13:00:19.983745 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-vrcvz_98f0e6fe-f27f-4d75-9149-6238b2220849/marketplace-operator/0.log" Jan 21 13:00:20 crc kubenswrapper[4881]: I0121 13:00:20.294389 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rs9gj_c6d87675-513f-412d-a34c-d789cce5b4e8/registry-server/0.log" Jan 21 13:00:20 crc kubenswrapper[4881]: I0121 13:00:20.301477 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rs9gj_c6d87675-513f-412d-a34c-d789cce5b4e8/extract-utilities/0.log" Jan 21 13:00:20 crc kubenswrapper[4881]: I0121 13:00:20.307604 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rs9gj_c6d87675-513f-412d-a34c-d789cce5b4e8/extract-content/0.log" Jan 21 13:00:21 crc kubenswrapper[4881]: I0121 13:00:21.336159 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kfzl8_8ab3938c-6614-4877-a94c-75b90f339523/registry-server/0.log" Jan 21 13:00:21 crc kubenswrapper[4881]: I0121 13:00:21.341898 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kfzl8_8ab3938c-6614-4877-a94c-75b90f339523/extract-utilities/0.log" Jan 21 13:00:21 crc kubenswrapper[4881]: I0121 13:00:21.349934 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kfzl8_8ab3938c-6614-4877-a94c-75b90f339523/extract-content/0.log" Jan 21 13:00:24 crc kubenswrapper[4881]: I0121 13:00:24.348993 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-rp92p_999c36a2-9f08-4da1-b14a-859ac888ae38/prometheus-operator/0.log" Jan 21 13:00:24 crc kubenswrapper[4881]: I0121 13:00:24.377182 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-75db897d97-h5vzg_c2181303-fd96-43e5-b6f2-158cca65c0b4/prometheus-operator-admission-webhook/0.log" Jan 21 13:00:24 crc kubenswrapper[4881]: I0121 13:00:24.386584 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-75db897d97-n5xvb_952218f5-7dfc-40d5-a1df-2c462e1e4dcc/prometheus-operator-admission-webhook/0.log" Jan 21 13:00:24 crc kubenswrapper[4881]: I0121 13:00:24.426221 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-tfzsc_19be64a6-6795-4219-8d58-47f744ef8e13/operator/0.log" Jan 21 13:00:24 crc kubenswrapper[4881]: I0121 13:00:24.435941 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-6srxm_1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50/perses-operator/0.log" Jan 21 13:00:29 crc kubenswrapper[4881]: I0121 13:00:29.851921 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:00:29 crc kubenswrapper[4881]: I0121 13:00:29.852573 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:00:29 crc kubenswrapper[4881]: I0121 13:00:29.852640 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 13:00:29 crc kubenswrapper[4881]: I0121 13:00:29.853698 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 13:00:29 crc kubenswrapper[4881]: I0121 13:00:29.853840 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" gracePeriod=600 Jan 21 13:00:29 crc kubenswrapper[4881]: E0121 13:00:29.982088 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:00:30 crc kubenswrapper[4881]: I0121 13:00:30.331615 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" exitCode=0 Jan 21 13:00:30 crc kubenswrapper[4881]: I0121 13:00:30.331678 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a"} Jan 21 13:00:30 crc kubenswrapper[4881]: I0121 13:00:30.331729 4881 scope.go:117] "RemoveContainer" containerID="dedb540716d32e2d9c1d7422b582f5eca19a8a8f41fc5f2cec024d263d91f035" Jan 21 13:00:30 crc kubenswrapper[4881]: I0121 13:00:30.332711 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:00:30 crc kubenswrapper[4881]: E0121 13:00:30.333203 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:00:35 crc kubenswrapper[4881]: I0121 13:00:35.140816 
4881 scope.go:117] "RemoveContainer" containerID="bf9af12b6f88ac7a2c2f3b75d58737d697a4cfe360d0edd4e874140a2c1b67eb" Jan 21 13:00:45 crc kubenswrapper[4881]: I0121 13:00:45.310710 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:00:45 crc kubenswrapper[4881]: E0121 13:00:45.311660 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:00:57 crc kubenswrapper[4881]: I0121 13:00:57.315292 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:00:57 crc kubenswrapper[4881]: E0121 13:00:57.316102 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.154510 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29483341-vfrqn"] Jan 21 13:01:00 crc kubenswrapper[4881]: E0121 13:01:00.155532 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3d03c94-fe93-4321-a2a8-44fc4e42cecf" containerName="collect-profiles" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.155546 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3d03c94-fe93-4321-a2a8-44fc4e42cecf" containerName="collect-profiles" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.155802 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3d03c94-fe93-4321-a2a8-44fc4e42cecf" containerName="collect-profiles" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.156797 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.172508 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483341-vfrqn"] Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.322832 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn59g\" (UniqueName: \"kubernetes.io/projected/31661525-070b-49cf-aacb-1c845c697019-kube-api-access-dn59g\") pod \"keystone-cron-29483341-vfrqn\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.323417 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-config-data\") pod \"keystone-cron-29483341-vfrqn\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.324857 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-combined-ca-bundle\") pod \"keystone-cron-29483341-vfrqn\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.325327 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-fernet-keys\") pod \"keystone-cron-29483341-vfrqn\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.427717 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-config-data\") pod \"keystone-cron-29483341-vfrqn\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.427851 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-combined-ca-bundle\") pod \"keystone-cron-29483341-vfrqn\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.428031 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-fernet-keys\") pod \"keystone-cron-29483341-vfrqn\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.428133 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn59g\" (UniqueName: \"kubernetes.io/projected/31661525-070b-49cf-aacb-1c845c697019-kube-api-access-dn59g\") pod \"keystone-cron-29483341-vfrqn\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.437532 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-combined-ca-bundle\") pod \"keystone-cron-29483341-vfrqn\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.438896 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-fernet-keys\") pod \"keystone-cron-29483341-vfrqn\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.441357 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-config-data\") pod \"keystone-cron-29483341-vfrqn\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.462810 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn59g\" (UniqueName: \"kubernetes.io/projected/31661525-070b-49cf-aacb-1c845c697019-kube-api-access-dn59g\") pod \"keystone-cron-29483341-vfrqn\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.488151 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:01 crc kubenswrapper[4881]: I0121 13:01:01.020853 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483341-vfrqn"] Jan 21 13:01:01 crc kubenswrapper[4881]: I0121 13:01:01.680521 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483341-vfrqn" event={"ID":"31661525-070b-49cf-aacb-1c845c697019","Type":"ContainerStarted","Data":"ba793499a48deef1e2360f820f6470dfc6c8e5503512124453c90760305db802"} Jan 21 13:01:01 crc kubenswrapper[4881]: I0121 13:01:01.680840 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483341-vfrqn" event={"ID":"31661525-070b-49cf-aacb-1c845c697019","Type":"ContainerStarted","Data":"4f1e0945a56b36d21713ae3bdeed7a4a2e74eb2ddbd92c68658409cb2bfbca03"} Jan 21 13:01:01 crc kubenswrapper[4881]: I0121 13:01:01.705878 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29483341-vfrqn" podStartSLOduration=1.705850256 podStartE2EDuration="1.705850256s" podCreationTimestamp="2026-01-21 13:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:01:01.697135715 +0000 UTC m=+7448.957092184" watchObservedRunningTime="2026-01-21 13:01:01.705850256 +0000 UTC m=+7448.965806725" Jan 21 13:01:06 crc kubenswrapper[4881]: I0121 13:01:06.732029 4881 generic.go:334] "Generic (PLEG): container finished" podID="31661525-070b-49cf-aacb-1c845c697019" containerID="ba793499a48deef1e2360f820f6470dfc6c8e5503512124453c90760305db802" exitCode=0 Jan 21 13:01:06 crc kubenswrapper[4881]: I0121 13:01:06.732119 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483341-vfrqn" event={"ID":"31661525-070b-49cf-aacb-1c845c697019","Type":"ContainerDied","Data":"ba793499a48deef1e2360f820f6470dfc6c8e5503512124453c90760305db802"} Jan 21 13:01:08 crc kubenswrapper[4881]: 
I0121 13:01:08.208950 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:08 crc kubenswrapper[4881]: I0121 13:01:08.319039 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-combined-ca-bundle\") pod \"31661525-070b-49cf-aacb-1c845c697019\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " Jan 21 13:01:08 crc kubenswrapper[4881]: I0121 13:01:08.319110 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-config-data\") pod \"31661525-070b-49cf-aacb-1c845c697019\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " Jan 21 13:01:08 crc kubenswrapper[4881]: I0121 13:01:08.319280 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn59g\" (UniqueName: \"kubernetes.io/projected/31661525-070b-49cf-aacb-1c845c697019-kube-api-access-dn59g\") pod \"31661525-070b-49cf-aacb-1c845c697019\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " Jan 21 13:01:08 crc kubenswrapper[4881]: I0121 13:01:08.319420 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-fernet-keys\") pod \"31661525-070b-49cf-aacb-1c845c697019\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " Jan 21 13:01:08 crc kubenswrapper[4881]: I0121 13:01:08.340199 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31661525-070b-49cf-aacb-1c845c697019-kube-api-access-dn59g" (OuterVolumeSpecName: "kube-api-access-dn59g") pod "31661525-070b-49cf-aacb-1c845c697019" (UID: "31661525-070b-49cf-aacb-1c845c697019"). InnerVolumeSpecName "kube-api-access-dn59g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:01:08 crc kubenswrapper[4881]: I0121 13:01:08.340361 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "31661525-070b-49cf-aacb-1c845c697019" (UID: "31661525-070b-49cf-aacb-1c845c697019"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:01:08 crc kubenswrapper[4881]: I0121 13:01:08.354966 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31661525-070b-49cf-aacb-1c845c697019" (UID: "31661525-070b-49cf-aacb-1c845c697019"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:01:08 crc kubenswrapper[4881]: I0121 13:01:08.383090 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-config-data" (OuterVolumeSpecName: "config-data") pod "31661525-070b-49cf-aacb-1c845c697019" (UID: "31661525-070b-49cf-aacb-1c845c697019"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:01:08 crc kubenswrapper[4881]: I0121 13:01:08.422822 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn59g\" (UniqueName: \"kubernetes.io/projected/31661525-070b-49cf-aacb-1c845c697019-kube-api-access-dn59g\") on node \"crc\" DevicePath \"\"" Jan 21 13:01:08 crc kubenswrapper[4881]: I0121 13:01:08.422865 4881 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 13:01:08 crc kubenswrapper[4881]: I0121 13:01:08.422875 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:01:08 crc kubenswrapper[4881]: I0121 13:01:08.422884 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:01:08 crc kubenswrapper[4881]: I0121 13:01:08.755855 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483341-vfrqn" event={"ID":"31661525-070b-49cf-aacb-1c845c697019","Type":"ContainerDied","Data":"4f1e0945a56b36d21713ae3bdeed7a4a2e74eb2ddbd92c68658409cb2bfbca03"} Jan 21 13:01:08 crc kubenswrapper[4881]: I0121 13:01:08.755891 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:08 crc kubenswrapper[4881]: I0121 13:01:08.755901 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f1e0945a56b36d21713ae3bdeed7a4a2e74eb2ddbd92c68658409cb2bfbca03" Jan 21 13:01:10 crc kubenswrapper[4881]: I0121 13:01:10.311166 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:01:10 crc kubenswrapper[4881]: E0121 13:01:10.311852 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:01:21 crc kubenswrapper[4881]: I0121 13:01:21.310899 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:01:21 crc kubenswrapper[4881]: E0121 13:01:21.311646 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:01:34 crc kubenswrapper[4881]: I0121 13:01:34.313201 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:01:34 crc kubenswrapper[4881]: E0121 13:01:34.313910 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:01:45 crc kubenswrapper[4881]: I0121 13:01:45.312933 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:01:45 crc kubenswrapper[4881]: E0121 13:01:45.313840 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:01:52 crc kubenswrapper[4881]: I0121 13:01:52.445090 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-rp92p_999c36a2-9f08-4da1-b14a-859ac888ae38/prometheus-operator/0.log" Jan 21 13:01:52 crc kubenswrapper[4881]: I0121 13:01:52.464808 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-75db897d97-h5vzg_c2181303-fd96-43e5-b6f2-158cca65c0b4/prometheus-operator-admission-webhook/0.log" Jan 21 13:01:52 crc kubenswrapper[4881]: I0121 13:01:52.479544 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-75db897d97-n5xvb_952218f5-7dfc-40d5-a1df-2c462e1e4dcc/prometheus-operator-admission-webhook/0.log" Jan 21 13:01:52 crc kubenswrapper[4881]: I0121 13:01:52.515570 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-tfzsc_19be64a6-6795-4219-8d58-47f744ef8e13/operator/0.log" Jan 21 13:01:52 crc kubenswrapper[4881]: I0121 13:01:52.530466 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-6srxm_1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50/perses-operator/0.log" Jan 21 13:01:52 crc kubenswrapper[4881]: I0121 13:01:52.742998 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-h2ttp_faf7e95d-07e7-4d3d-936b-26b187fc0b0c/cert-manager-controller/0.log" Jan 21 13:01:52 crc kubenswrapper[4881]: I0121 13:01:52.756735 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-cdm4s_1d8014cf-8827-449d-b5fa-d0c098cc377e/cert-manager-cainjector/0.log" Jan 21 13:01:52 crc kubenswrapper[4881]: I0121 13:01:52.767194 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-csqtv_2aeab03b-23ac-4cc2-8f0f-db1111ef2cc4/cert-manager-webhook/0.log" Jan 21 13:01:53 crc kubenswrapper[4881]: I0121 13:01:53.713183 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-dmwlt_c4a109b4-26ee-4a46-9333-989cf87c0ff7/controller/0.log" Jan 21 13:01:53 crc kubenswrapper[4881]: I0121 13:01:53.719493 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-dmwlt_c4a109b4-26ee-4a46-9333-989cf87c0ff7/kube-rbac-proxy/0.log" Jan 21 13:01:53 crc kubenswrapper[4881]: I0121 13:01:53.741961 4881 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/controller/0.log" Jan 21 13:01:53 crc kubenswrapper[4881]: I0121 13:01:53.743643 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l_1c737afe-a2ad-4075-acd6-9f73aada0e4b/extract/0.log" Jan 21 13:01:53 crc kubenswrapper[4881]: I0121 13:01:53.751161 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l_1c737afe-a2ad-4075-acd6-9f73aada0e4b/util/0.log" Jan 21 13:01:53 crc kubenswrapper[4881]: I0121 13:01:53.762771 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l_1c737afe-a2ad-4075-acd6-9f73aada0e4b/pull/0.log" Jan 21 13:01:53 crc kubenswrapper[4881]: I0121 13:01:53.968722 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-svq8w_848fd8db-3bd5-4e22-96ca-f69b181e48be/manager/0.log" Jan 21 13:01:54 crc kubenswrapper[4881]: I0121 13:01:54.064087 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-7qgck_a028dcae-6b9d-414d-8bab-652f301de541/manager/0.log" Jan 21 13:01:54 crc kubenswrapper[4881]: I0121 13:01:54.076593 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-4wmln_36e5ddfe-67a4-4721-9ef5-b9459c64bf5c/manager/0.log" Jan 21 13:01:54 crc kubenswrapper[4881]: I0121 13:01:54.193728 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-jv7cr_1f795f92-d385-49bc-bc91-5109734f4d5a/manager/0.log" Jan 21 13:01:54 crc kubenswrapper[4881]: I0121 13:01:54.205142 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-zmgll_efb259b7-934f-4bc3-b502-633472d1a1c5/manager/0.log" Jan 21 13:01:54 crc kubenswrapper[4881]: I0121 13:01:54.254500 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-bv8wz_bb9b2c3f-4f68-44fc-addf-2cf4421be015/manager/0.log" Jan 21 13:01:54 crc kubenswrapper[4881]: I0121 13:01:54.869428 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-klgq4_2fe210a4-2adf-4b55-9a43-c1c390f51b0e/manager/0.log" Jan 21 13:01:54 crc kubenswrapper[4881]: I0121 13:01:54.885657 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-5qcms_d0cafd1d-5f37-499a-a531-547a137aae21/manager/0.log" Jan 21 13:01:55 crc kubenswrapper[4881]: I0121 13:01:55.082190 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-9zp7h_ba9a1249-fc58-4809-a472-d199afa9b52b/manager/0.log" Jan 21 13:01:55 crc kubenswrapper[4881]: I0121 13:01:55.097897 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-h6dr4_b72b2323-5329-4145-9cee-b447d9e2a304/manager/0.log" Jan 21 13:01:55 crc kubenswrapper[4881]: I0121 13:01:55.159207 4881 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-s6gm8_4c2550fe-b3eb-4eef-8ffc-ebb4a9ce1b5f/manager/0.log" Jan 21 13:01:55 crc kubenswrapper[4881]: I0121 13:01:55.242226 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-ncnww_c3b86204-5389-4b6a-bd45-fb6ee23b784e/manager/0.log" Jan 21 13:01:55 crc kubenswrapper[4881]: I0121 13:01:55.451564 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-798zt_761a1a49-e01e-4674-b1f4-da732e1def98/manager/0.log" Jan 21 13:01:55 crc kubenswrapper[4881]: I0121 13:01:55.462539 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-n7kgd_340257c4-9218-49b0-8a75-b2a4e0231fe3/manager/0.log" Jan 21 13:01:55 crc kubenswrapper[4881]: I0121 13:01:55.487668 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8544795q_b1b17be2-e382-4916-8e53-a68c85b5bfc2/manager/0.log" Jan 21 13:01:55 crc kubenswrapper[4881]: I0121 13:01:55.826137 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-766b56994f-7hsc6_3a9a96af-4c4b-45b4-ade0-688a9029cf7b/operator/0.log" Jan 21 13:01:56 crc kubenswrapper[4881]: I0121 13:01:56.823313 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/frr/0.log" Jan 21 13:01:56 crc kubenswrapper[4881]: I0121 13:01:56.835626 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/reloader/0.log" Jan 21 13:01:56 crc kubenswrapper[4881]: I0121 13:01:56.841118 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/frr-metrics/0.log" Jan 21 13:01:56 crc kubenswrapper[4881]: I0121 13:01:56.853761 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/kube-rbac-proxy/0.log" Jan 21 13:01:56 crc kubenswrapper[4881]: I0121 13:01:56.863829 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/kube-rbac-proxy-frr/0.log" Jan 21 13:01:56 crc kubenswrapper[4881]: I0121 13:01:56.870613 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/cp-frr-files/0.log" Jan 21 13:01:56 crc kubenswrapper[4881]: I0121 13:01:56.881708 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/cp-reloader/0.log" Jan 21 13:01:56 crc kubenswrapper[4881]: I0121 13:01:56.890332 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/cp-metrics/0.log" Jan 21 13:01:56 crc kubenswrapper[4881]: I0121 13:01:56.907644 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-tzxpk_eaaea696-21d8-4963-8364-82fa7bbb0e19/frr-k8s-webhook-server/0.log" Jan 21 13:01:56 crc kubenswrapper[4881]: I0121 13:01:56.939494 4881 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-58bd8f8bd-8k4c9_769e47b6-bd47-489d-9b99-4f2f0e30c4fd/manager/0.log" Jan 21 13:01:56 crc kubenswrapper[4881]: I0121 13:01:56.949074 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5cd4664cfc-6lg4r_a194c95e-cbcb-4d7e-a631-d4a14989e985/webhook-server/0.log" Jan 21 13:01:57 crc kubenswrapper[4881]: I0121 13:01:57.550272 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-697j4_f265a6e2-ea90-45ea-89c0-178d25243784/speaker/0.log" Jan 21 13:01:57 crc kubenswrapper[4881]: I0121 13:01:57.559459 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-697j4_f265a6e2-ea90-45ea-89c0-178d25243784/kube-rbac-proxy/0.log" Jan 21 13:01:57 crc kubenswrapper[4881]: I0121 13:01:57.667132 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-87d6d564b-ktcf8_a55fdb43-cd6c-4415-8ef6-07f6c7da6272/manager/0.log" Jan 21 13:01:57 crc kubenswrapper[4881]: I0121 13:01:57.680643 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7vz4j_0a051fc2-b6e4-463c-bb0a-b565d12b21b4/registry-server/0.log" Jan 21 13:01:57 crc kubenswrapper[4881]: I0121 13:01:57.742277 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-vpqw4_50cfdf18-6a7e-4b3c-bb0f-5260fc3d42eb/manager/0.log" Jan 21 13:01:57 crc kubenswrapper[4881]: I0121 13:01:57.767522 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-jh4z9_e8e6f423-a07b-4a22-9e39-efa8de22747e/manager/0.log" Jan 21 13:01:57 crc kubenswrapper[4881]: I0121 13:01:57.800142 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-76qxc_8c8feeec-377c-499a-b666-895010f8ebeb/operator/0.log" Jan 21 13:01:57 crc kubenswrapper[4881]: I0121 13:01:57.834361 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-rk8l8_8c504afd-e4e0-4676-b292-b569b638a7dd/manager/0.log" Jan 21 13:01:58 crc kubenswrapper[4881]: I0121 13:01:58.062824 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-fcht4_55ce5ee6-47f4-4874-92dc-6ab78f2ce212/manager/0.log" Jan 21 13:01:58 crc kubenswrapper[4881]: I0121 13:01:58.080292 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-tttcz_2aac430e-3ac8-4624-8575-66386b5c2df3/manager/0.log" Jan 21 13:01:58 crc kubenswrapper[4881]: I0121 13:01:58.163380 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-849fd9b886-k9t7q_1cebbaaf-6189-409a-8f25-43d7fac77f95/manager/0.log" Jan 21 13:01:58 crc kubenswrapper[4881]: I0121 13:01:58.312747 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:01:58 crc kubenswrapper[4881]: E0121 13:01:58.313275 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:01:58 crc kubenswrapper[4881]: I0121 13:01:58.533704 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-h2ttp_faf7e95d-07e7-4d3d-936b-26b187fc0b0c/cert-manager-controller/0.log" Jan 21 13:01:58 crc kubenswrapper[4881]: I0121 13:01:58.548893 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-cdm4s_1d8014cf-8827-449d-b5fa-d0c098cc377e/cert-manager-cainjector/0.log" Jan 21 13:01:58 crc kubenswrapper[4881]: I0121 13:01:58.558403 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-csqtv_2aeab03b-23ac-4cc2-8f0f-db1111ef2cc4/cert-manager-webhook/0.log" Jan 21 13:01:59 crc kubenswrapper[4881]: I0121 13:01:59.160586 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-hfc8p_bc38f0b5-944c-40ae-aed0-50ca39ea2627/control-plane-machine-set-operator/0.log" Jan 21 13:01:59 crc kubenswrapper[4881]: I0121 13:01:59.181043 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-cclnc_8465162e-dd9f-45b4-83a6-94666ac2b87b/kube-rbac-proxy/0.log" Jan 21 13:01:59 crc kubenswrapper[4881]: I0121 13:01:59.194160 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-cclnc_8465162e-dd9f-45b4-83a6-94666ac2b87b/machine-api-operator/0.log" Jan 21 13:01:59 crc kubenswrapper[4881]: I0121 13:01:59.582941 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-lgdjc_fcdadd73-568f-4ae0-a7bb-9330b2feb835/nmstate-console-plugin/0.log" Jan 21 13:01:59 crc kubenswrapper[4881]: I0121 13:01:59.603738 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-b9rcw_5c705c83-efa0-436f-a0b5-9164dbb6b1df/nmstate-handler/0.log" Jan 21 13:01:59 crc kubenswrapper[4881]: I0121 13:01:59.617533 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-ft48b_f68408aa-3450-42af-a6f8-b5260973f272/nmstate-metrics/0.log" Jan 21 13:01:59 crc kubenswrapper[4881]: I0121 13:01:59.636461 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-ft48b_f68408aa-3450-42af-a6f8-b5260973f272/kube-rbac-proxy/0.log" Jan 21 13:01:59 crc kubenswrapper[4881]: I0121 13:01:59.650260 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-zlxs9_14878b0e-37cc-4c03-89df-ba23d94589a0/nmstate-operator/0.log" Jan 21 13:01:59 crc kubenswrapper[4881]: I0121 13:01:59.707561 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-qmv5k_b6262b8c-2531-4008-9bb8-c3beeb66a3ed/nmstate-webhook/0.log" Jan 21 13:02:00 crc kubenswrapper[4881]: I0121 13:02:00.087655 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l_1c737afe-a2ad-4075-acd6-9f73aada0e4b/extract/0.log" Jan 21 13:02:00 crc kubenswrapper[4881]: I0121 13:02:00.092988 4881 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l_1c737afe-a2ad-4075-acd6-9f73aada0e4b/util/0.log" Jan 21 13:02:00 crc kubenswrapper[4881]: I0121 13:02:00.103836 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l_1c737afe-a2ad-4075-acd6-9f73aada0e4b/pull/0.log" Jan 21 13:02:00 crc kubenswrapper[4881]: I0121 13:02:00.174636 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-svq8w_848fd8db-3bd5-4e22-96ca-f69b181e48be/manager/0.log" Jan 21 13:02:00 crc kubenswrapper[4881]: I0121 13:02:00.248373 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-7qgck_a028dcae-6b9d-414d-8bab-652f301de541/manager/0.log" Jan 21 13:02:00 crc kubenswrapper[4881]: I0121 13:02:00.263290 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-4wmln_36e5ddfe-67a4-4721-9ef5-b9459c64bf5c/manager/0.log" Jan 21 13:02:00 crc kubenswrapper[4881]: I0121 13:02:00.330845 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-jv7cr_1f795f92-d385-49bc-bc91-5109734f4d5a/manager/0.log" Jan 21 13:02:00 crc kubenswrapper[4881]: I0121 13:02:00.406995 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-zmgll_efb259b7-934f-4bc3-b502-633472d1a1c5/manager/0.log" Jan 21 13:02:00 crc kubenswrapper[4881]: I0121 13:02:00.545916 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-bv8wz_bb9b2c3f-4f68-44fc-addf-2cf4421be015/manager/0.log" Jan 21 13:02:00 crc kubenswrapper[4881]: I0121 13:02:00.834805 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-klgq4_2fe210a4-2adf-4b55-9a43-c1c390f51b0e/manager/0.log" Jan 21 13:02:00 crc kubenswrapper[4881]: I0121 13:02:00.858146 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-5qcms_d0cafd1d-5f37-499a-a531-547a137aae21/manager/0.log" Jan 21 13:02:00 crc kubenswrapper[4881]: I0121 13:02:00.948235 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-9zp7h_ba9a1249-fc58-4809-a472-d199afa9b52b/manager/0.log" Jan 21 13:02:00 crc kubenswrapper[4881]: I0121 13:02:00.981348 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-h6dr4_b72b2323-5329-4145-9cee-b447d9e2a304/manager/0.log" Jan 21 13:02:01 crc kubenswrapper[4881]: I0121 13:02:01.062674 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-s6gm8_4c2550fe-b3eb-4eef-8ffc-ebb4a9ce1b5f/manager/0.log" Jan 21 13:02:01 crc kubenswrapper[4881]: I0121 13:02:01.105744 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-ncnww_c3b86204-5389-4b6a-bd45-fb6ee23b784e/manager/0.log" Jan 21 13:02:01 crc kubenswrapper[4881]: I0121 13:02:01.170921 4881 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-798zt_761a1a49-e01e-4674-b1f4-da732e1def98/manager/0.log" Jan 21 13:02:01 crc kubenswrapper[4881]: I0121 13:02:01.184889 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-n7kgd_340257c4-9218-49b0-8a75-b2a4e0231fe3/manager/0.log" Jan 21 13:02:01 crc kubenswrapper[4881]: I0121 13:02:01.213935 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8544795q_b1b17be2-e382-4916-8e53-a68c85b5bfc2/manager/0.log" Jan 21 13:02:01 crc kubenswrapper[4881]: I0121 13:02:01.399026 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-766b56994f-7hsc6_3a9a96af-4c4b-45b4-ade0-688a9029cf7b/operator/0.log" Jan 21 13:02:02 crc kubenswrapper[4881]: I0121 13:02:02.379423 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-87d6d564b-ktcf8_a55fdb43-cd6c-4415-8ef6-07f6c7da6272/manager/0.log" Jan 21 13:02:02 crc kubenswrapper[4881]: I0121 13:02:02.409409 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7vz4j_0a051fc2-b6e4-463c-bb0a-b565d12b21b4/registry-server/0.log" Jan 21 13:02:02 crc kubenswrapper[4881]: I0121 13:02:02.489993 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-vpqw4_50cfdf18-6a7e-4b3c-bb0f-5260fc3d42eb/manager/0.log" Jan 21 13:02:02 crc kubenswrapper[4881]: I0121 13:02:02.543571 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-jh4z9_e8e6f423-a07b-4a22-9e39-efa8de22747e/manager/0.log" Jan 21 13:02:02 crc kubenswrapper[4881]: I0121 13:02:02.582824 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-76qxc_8c8feeec-377c-499a-b666-895010f8ebeb/operator/0.log" Jan 21 13:02:02 crc kubenswrapper[4881]: I0121 13:02:02.669330 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-rk8l8_8c504afd-e4e0-4676-b292-b569b638a7dd/manager/0.log" Jan 21 13:02:02 crc kubenswrapper[4881]: I0121 13:02:02.842193 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-fcht4_55ce5ee6-47f4-4874-92dc-6ab78f2ce212/manager/0.log" Jan 21 13:02:02 crc kubenswrapper[4881]: I0121 13:02:02.858038 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-tttcz_2aac430e-3ac8-4624-8575-66386b5c2df3/manager/0.log" Jan 21 13:02:02 crc kubenswrapper[4881]: I0121 13:02:02.926618 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-849fd9b886-k9t7q_1cebbaaf-6189-409a-8f25-43d7fac77f95/manager/0.log" Jan 21 13:02:03 crc kubenswrapper[4881]: I0121 13:02:03.085393 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 13:02:05 crc kubenswrapper[4881]: I0121 13:02:05.023956 4881 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-v4wxp_c14980d7-1b3b-463b-8f57-f1e1afbd258c/kube-multus-additional-cni-plugins/0.log" Jan 21 13:02:05 crc kubenswrapper[4881]: I0121 13:02:05.034974 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-v4wxp_c14980d7-1b3b-463b-8f57-f1e1afbd258c/egress-router-binary-copy/0.log" Jan 21 13:02:05 crc kubenswrapper[4881]: I0121 13:02:05.043933 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-v4wxp_c14980d7-1b3b-463b-8f57-f1e1afbd258c/cni-plugins/0.log" Jan 21 13:02:05 crc kubenswrapper[4881]: I0121 13:02:05.053107 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-v4wxp_c14980d7-1b3b-463b-8f57-f1e1afbd258c/bond-cni-plugin/0.log" Jan 21 13:02:05 crc kubenswrapper[4881]: I0121 13:02:05.060648 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-v4wxp_c14980d7-1b3b-463b-8f57-f1e1afbd258c/routeoverride-cni/0.log" Jan 21 13:02:05 crc kubenswrapper[4881]: I0121 13:02:05.069046 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-v4wxp_c14980d7-1b3b-463b-8f57-f1e1afbd258c/whereabouts-cni-bincopy/0.log" Jan 21 13:02:05 crc kubenswrapper[4881]: I0121 13:02:05.079806 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-v4wxp_c14980d7-1b3b-463b-8f57-f1e1afbd258c/whereabouts-cni/0.log" Jan 21 13:02:05 crc kubenswrapper[4881]: I0121 13:02:05.120827 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-j4s5w_6742e18f-a187-4a77-a734-bdec89bd89e0/multus-admission-controller/0.log" Jan 21 13:02:05 crc kubenswrapper[4881]: I0121 13:02:05.127574 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-j4s5w_6742e18f-a187-4a77-a734-bdec89bd89e0/kube-rbac-proxy/0.log" Jan 21 13:02:05 crc kubenswrapper[4881]: I0121 13:02:05.190411 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fs42r_09da9e14-f6d5-4346-a4a0-c17711e3b603/kube-multus/1.log" Jan 21 13:02:05 crc kubenswrapper[4881]: I0121 13:02:05.283329 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fs42r_09da9e14-f6d5-4346-a4a0-c17711e3b603/kube-multus/2.log" Jan 21 13:02:05 crc kubenswrapper[4881]: I0121 13:02:05.322482 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-dtv4t_3552adbd-011f-4552-9e69-233b92c554c8/network-metrics-daemon/0.log" Jan 21 13:02:05 crc kubenswrapper[4881]: I0121 13:02:05.329180 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-dtv4t_3552adbd-011f-4552-9e69-233b92c554c8/kube-rbac-proxy/0.log" Jan 21 13:02:10 crc kubenswrapper[4881]: I0121 13:02:10.312511 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:02:10 crc kubenswrapper[4881]: E0121 13:02:10.314234 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:02:24 crc kubenswrapper[4881]: I0121 13:02:24.312085 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:02:24 crc kubenswrapper[4881]: E0121 13:02:24.313257 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:02:39 crc kubenswrapper[4881]: I0121 13:02:39.312075 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:02:39 crc kubenswrapper[4881]: E0121 13:02:39.313101 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:02:51 crc kubenswrapper[4881]: I0121 13:02:51.471948 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:02:51 crc kubenswrapper[4881]: E0121 13:02:51.472605 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:03:03 crc kubenswrapper[4881]: I0121 13:03:03.320923 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:03:03 crc kubenswrapper[4881]: E0121 13:03:03.324443 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:03:14 crc kubenswrapper[4881]: I0121 13:03:14.311376 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:03:14 crc kubenswrapper[4881]: E0121 13:03:14.312183 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" 
podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:03:26 crc kubenswrapper[4881]: I0121 13:03:26.311718 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:03:26 crc kubenswrapper[4881]: E0121 13:03:26.312732 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:03:38 crc kubenswrapper[4881]: I0121 13:03:38.311329 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:03:38 crc kubenswrapper[4881]: E0121 13:03:38.312158 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:03:49 crc kubenswrapper[4881]: I0121 13:03:49.311194 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:03:49 crc kubenswrapper[4881]: E0121 13:03:49.311839 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:04:02 crc kubenswrapper[4881]: I0121 13:04:02.310811 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:04:02 crc kubenswrapper[4881]: E0121 13:04:02.311564 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:04:16 crc kubenswrapper[4881]: I0121 13:04:16.311244 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:04:16 crc kubenswrapper[4881]: E0121 13:04:16.312585 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:04:17 crc kubenswrapper[4881]: I0121 13:04:17.163854 4881 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-kbtdr"] Jan 21 13:04:17 crc kubenswrapper[4881]: E0121 13:04:17.164487 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31661525-070b-49cf-aacb-1c845c697019" containerName="keystone-cron" Jan 21 13:04:17 crc kubenswrapper[4881]: I0121 13:04:17.164508 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="31661525-070b-49cf-aacb-1c845c697019" containerName="keystone-cron" Jan 21 13:04:17 crc kubenswrapper[4881]: I0121 13:04:17.164803 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="31661525-070b-49cf-aacb-1c845c697019" containerName="keystone-cron" Jan 21 13:04:17 crc kubenswrapper[4881]: I0121 13:04:17.166857 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:17 crc kubenswrapper[4881]: I0121 13:04:17.187401 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kbtdr"] Jan 21 13:04:17 crc kubenswrapper[4881]: I0121 13:04:17.328585 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bec100bc-3f06-4e9f-92c8-d2150746c720-catalog-content\") pod \"redhat-marketplace-kbtdr\" (UID: \"bec100bc-3f06-4e9f-92c8-d2150746c720\") " pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:17 crc kubenswrapper[4881]: I0121 13:04:17.328677 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bec100bc-3f06-4e9f-92c8-d2150746c720-utilities\") pod \"redhat-marketplace-kbtdr\" (UID: \"bec100bc-3f06-4e9f-92c8-d2150746c720\") " pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:17 crc kubenswrapper[4881]: I0121 13:04:17.328718 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff5v7\" (UniqueName: \"kubernetes.io/projected/bec100bc-3f06-4e9f-92c8-d2150746c720-kube-api-access-ff5v7\") pod \"redhat-marketplace-kbtdr\" (UID: \"bec100bc-3f06-4e9f-92c8-d2150746c720\") " pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:17 crc kubenswrapper[4881]: I0121 13:04:17.430509 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bec100bc-3f06-4e9f-92c8-d2150746c720-catalog-content\") pod \"redhat-marketplace-kbtdr\" (UID: \"bec100bc-3f06-4e9f-92c8-d2150746c720\") " pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:17 crc kubenswrapper[4881]: I0121 13:04:17.430644 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bec100bc-3f06-4e9f-92c8-d2150746c720-utilities\") pod \"redhat-marketplace-kbtdr\" (UID: \"bec100bc-3f06-4e9f-92c8-d2150746c720\") " pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:17 crc kubenswrapper[4881]: I0121 13:04:17.430728 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff5v7\" (UniqueName: \"kubernetes.io/projected/bec100bc-3f06-4e9f-92c8-d2150746c720-kube-api-access-ff5v7\") pod \"redhat-marketplace-kbtdr\" (UID: \"bec100bc-3f06-4e9f-92c8-d2150746c720\") " pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:17 crc kubenswrapper[4881]: I0121 13:04:17.431196 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bec100bc-3f06-4e9f-92c8-d2150746c720-catalog-content\") pod \"redhat-marketplace-kbtdr\" (UID: \"bec100bc-3f06-4e9f-92c8-d2150746c720\") " pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:17 crc kubenswrapper[4881]: I0121 13:04:17.431258 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bec100bc-3f06-4e9f-92c8-d2150746c720-utilities\") pod \"redhat-marketplace-kbtdr\" (UID: \"bec100bc-3f06-4e9f-92c8-d2150746c720\") " pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:17 crc kubenswrapper[4881]: I0121 13:04:17.454887 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff5v7\" (UniqueName: \"kubernetes.io/projected/bec100bc-3f06-4e9f-92c8-d2150746c720-kube-api-access-ff5v7\") pod \"redhat-marketplace-kbtdr\" (UID: \"bec100bc-3f06-4e9f-92c8-d2150746c720\") " pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:17 crc kubenswrapper[4881]: I0121 13:04:17.499776 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:18 crc kubenswrapper[4881]: I0121 13:04:18.000060 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kbtdr"] Jan 21 13:04:18 crc kubenswrapper[4881]: W0121 13:04:18.015537 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbec100bc_3f06_4e9f_92c8_d2150746c720.slice/crio-148a08f2d0886418890f59c8a5b8966ff6652d5c4935149a9f98df1736464a3a WatchSource:0}: Error finding container 148a08f2d0886418890f59c8a5b8966ff6652d5c4935149a9f98df1736464a3a: Status 404 returned error can't find the container with id 148a08f2d0886418890f59c8a5b8966ff6652d5c4935149a9f98df1736464a3a Jan 21 13:04:18 crc kubenswrapper[4881]: I0121 13:04:18.725824 4881 generic.go:334] "Generic (PLEG): container finished" podID="bec100bc-3f06-4e9f-92c8-d2150746c720" containerID="7ba9268affb7b36ede0c95f07ffb37c2eedb4287b3034bd5ca41d251a17b650e" exitCode=0 Jan 21 13:04:18 crc kubenswrapper[4881]: I0121 13:04:18.726056 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kbtdr" event={"ID":"bec100bc-3f06-4e9f-92c8-d2150746c720","Type":"ContainerDied","Data":"7ba9268affb7b36ede0c95f07ffb37c2eedb4287b3034bd5ca41d251a17b650e"} Jan 21 13:04:18 crc kubenswrapper[4881]: I0121 13:04:18.729140 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kbtdr" event={"ID":"bec100bc-3f06-4e9f-92c8-d2150746c720","Type":"ContainerStarted","Data":"148a08f2d0886418890f59c8a5b8966ff6652d5c4935149a9f98df1736464a3a"} Jan 21 13:04:18 crc kubenswrapper[4881]: I0121 13:04:18.729253 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 13:04:19 crc kubenswrapper[4881]: I0121 13:04:19.746337 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kbtdr" event={"ID":"bec100bc-3f06-4e9f-92c8-d2150746c720","Type":"ContainerStarted","Data":"52d6e3407218ada320893735ba478f1369a2a54d0c437542b8c2fab3e35c4b65"} Jan 21 13:04:20 crc kubenswrapper[4881]: I0121 13:04:20.764240 4881 generic.go:334] "Generic (PLEG): container finished" podID="bec100bc-3f06-4e9f-92c8-d2150746c720" 
containerID="52d6e3407218ada320893735ba478f1369a2a54d0c437542b8c2fab3e35c4b65" exitCode=0 Jan 21 13:04:20 crc kubenswrapper[4881]: I0121 13:04:20.764300 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kbtdr" event={"ID":"bec100bc-3f06-4e9f-92c8-d2150746c720","Type":"ContainerDied","Data":"52d6e3407218ada320893735ba478f1369a2a54d0c437542b8c2fab3e35c4b65"} Jan 21 13:04:21 crc kubenswrapper[4881]: I0121 13:04:21.774451 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kbtdr" event={"ID":"bec100bc-3f06-4e9f-92c8-d2150746c720","Type":"ContainerStarted","Data":"a613acb4af5b4ff0151733e528bac6fafdfcaaa1c659f0a6b2cc1730debc40e3"} Jan 21 13:04:21 crc kubenswrapper[4881]: I0121 13:04:21.798751 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kbtdr" podStartSLOduration=2.352642249 podStartE2EDuration="4.798732157s" podCreationTimestamp="2026-01-21 13:04:17 +0000 UTC" firstStartedPulling="2026-01-21 13:04:18.72852271 +0000 UTC m=+7645.988479219" lastFinishedPulling="2026-01-21 13:04:21.174612608 +0000 UTC m=+7648.434569127" observedRunningTime="2026-01-21 13:04:21.791923623 +0000 UTC m=+7649.051880092" watchObservedRunningTime="2026-01-21 13:04:21.798732157 +0000 UTC m=+7649.058688626" Jan 21 13:04:27 crc kubenswrapper[4881]: I0121 13:04:27.313985 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:04:27 crc kubenswrapper[4881]: E0121 13:04:27.314693 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:04:27 crc kubenswrapper[4881]: I0121 13:04:27.500647 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:27 crc kubenswrapper[4881]: I0121 13:04:27.500737 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:27 crc kubenswrapper[4881]: I0121 13:04:27.564253 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:27 crc kubenswrapper[4881]: I0121 13:04:27.913056 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:27 crc kubenswrapper[4881]: I0121 13:04:27.969644 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kbtdr"] Jan 21 13:04:29 crc kubenswrapper[4881]: I0121 13:04:29.885523 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kbtdr" podUID="bec100bc-3f06-4e9f-92c8-d2150746c720" containerName="registry-server" containerID="cri-o://a613acb4af5b4ff0151733e528bac6fafdfcaaa1c659f0a6b2cc1730debc40e3" gracePeriod=2 Jan 21 13:04:30 crc kubenswrapper[4881]: I0121 13:04:30.904757 4881 generic.go:334] "Generic (PLEG): container finished" podID="bec100bc-3f06-4e9f-92c8-d2150746c720" 
containerID="a613acb4af5b4ff0151733e528bac6fafdfcaaa1c659f0a6b2cc1730debc40e3" exitCode=0 Jan 21 13:04:30 crc kubenswrapper[4881]: I0121 13:04:30.904880 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kbtdr" event={"ID":"bec100bc-3f06-4e9f-92c8-d2150746c720","Type":"ContainerDied","Data":"a613acb4af5b4ff0151733e528bac6fafdfcaaa1c659f0a6b2cc1730debc40e3"} Jan 21 13:04:30 crc kubenswrapper[4881]: I0121 13:04:30.906058 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kbtdr" event={"ID":"bec100bc-3f06-4e9f-92c8-d2150746c720","Type":"ContainerDied","Data":"148a08f2d0886418890f59c8a5b8966ff6652d5c4935149a9f98df1736464a3a"} Jan 21 13:04:30 crc kubenswrapper[4881]: I0121 13:04:30.906083 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="148a08f2d0886418890f59c8a5b8966ff6652d5c4935149a9f98df1736464a3a" Jan 21 13:04:30 crc kubenswrapper[4881]: I0121 13:04:30.967019 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:30 crc kubenswrapper[4881]: I0121 13:04:30.983773 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bec100bc-3f06-4e9f-92c8-d2150746c720-utilities\") pod \"bec100bc-3f06-4e9f-92c8-d2150746c720\" (UID: \"bec100bc-3f06-4e9f-92c8-d2150746c720\") " Jan 21 13:04:30 crc kubenswrapper[4881]: I0121 13:04:30.984059 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bec100bc-3f06-4e9f-92c8-d2150746c720-catalog-content\") pod \"bec100bc-3f06-4e9f-92c8-d2150746c720\" (UID: \"bec100bc-3f06-4e9f-92c8-d2150746c720\") " Jan 21 13:04:30 crc kubenswrapper[4881]: I0121 13:04:30.984163 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff5v7\" (UniqueName: \"kubernetes.io/projected/bec100bc-3f06-4e9f-92c8-d2150746c720-kube-api-access-ff5v7\") pod \"bec100bc-3f06-4e9f-92c8-d2150746c720\" (UID: \"bec100bc-3f06-4e9f-92c8-d2150746c720\") " Jan 21 13:04:30 crc kubenswrapper[4881]: I0121 13:04:30.985492 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bec100bc-3f06-4e9f-92c8-d2150746c720-utilities" (OuterVolumeSpecName: "utilities") pod "bec100bc-3f06-4e9f-92c8-d2150746c720" (UID: "bec100bc-3f06-4e9f-92c8-d2150746c720"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:04:30 crc kubenswrapper[4881]: I0121 13:04:30.990018 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bec100bc-3f06-4e9f-92c8-d2150746c720-kube-api-access-ff5v7" (OuterVolumeSpecName: "kube-api-access-ff5v7") pod "bec100bc-3f06-4e9f-92c8-d2150746c720" (UID: "bec100bc-3f06-4e9f-92c8-d2150746c720"). InnerVolumeSpecName "kube-api-access-ff5v7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:04:31 crc kubenswrapper[4881]: I0121 13:04:31.014733 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bec100bc-3f06-4e9f-92c8-d2150746c720-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bec100bc-3f06-4e9f-92c8-d2150746c720" (UID: "bec100bc-3f06-4e9f-92c8-d2150746c720"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:04:31 crc kubenswrapper[4881]: I0121 13:04:31.086511 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bec100bc-3f06-4e9f-92c8-d2150746c720-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:04:31 crc kubenswrapper[4881]: I0121 13:04:31.086550 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bec100bc-3f06-4e9f-92c8-d2150746c720-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:04:31 crc kubenswrapper[4881]: I0121 13:04:31.086563 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ff5v7\" (UniqueName: \"kubernetes.io/projected/bec100bc-3f06-4e9f-92c8-d2150746c720-kube-api-access-ff5v7\") on node \"crc\" DevicePath \"\"" Jan 21 13:04:31 crc kubenswrapper[4881]: I0121 13:04:31.919679 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:31 crc kubenswrapper[4881]: I0121 13:04:31.961864 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kbtdr"] Jan 21 13:04:31 crc kubenswrapper[4881]: I0121 13:04:31.975079 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kbtdr"] Jan 21 13:04:33 crc kubenswrapper[4881]: I0121 13:04:33.353381 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bec100bc-3f06-4e9f-92c8-d2150746c720" path="/var/lib/kubelet/pods/bec100bc-3f06-4e9f-92c8-d2150746c720/volumes" Jan 21 13:04:38 crc kubenswrapper[4881]: I0121 13:04:38.312414 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:04:38 crc kubenswrapper[4881]: E0121 13:04:38.313224 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:04:53 crc kubenswrapper[4881]: I0121 13:04:53.324333 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:04:53 crc kubenswrapper[4881]: E0121 13:04:53.325241 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:05:07 crc kubenswrapper[4881]: I0121 13:05:07.311450 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:05:07 crc kubenswrapper[4881]: E0121 13:05:07.312363 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:05:21 crc kubenswrapper[4881]: I0121 13:05:21.311710 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:05:21 crc kubenswrapper[4881]: E0121 13:05:21.312859 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:05:35 crc kubenswrapper[4881]: I0121 13:05:35.312149 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:05:35 crc kubenswrapper[4881]: I0121 13:05:35.343424 4881 scope.go:117] "RemoveContainer" containerID="0818ec9313f2fc50a748108c2a7b4170d06db46eb9b811376ec620220e592ebc" Jan 21 13:05:35 crc kubenswrapper[4881]: I0121 13:05:35.718187 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"982d26bca9ae8535bd5c23122103aa1521012b2265c5406dc793a0fdc4c46b01"} Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.713090 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-klj4j"] Jan 21 13:05:46 crc kubenswrapper[4881]: E0121 13:05:46.713987 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bec100bc-3f06-4e9f-92c8-d2150746c720" containerName="registry-server" Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.714003 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bec100bc-3f06-4e9f-92c8-d2150746c720" containerName="registry-server" Jan 21 13:05:46 crc kubenswrapper[4881]: E0121 13:05:46.714028 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bec100bc-3f06-4e9f-92c8-d2150746c720" containerName="extract-utilities" Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.714035 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bec100bc-3f06-4e9f-92c8-d2150746c720" containerName="extract-utilities" Jan 21 13:05:46 crc kubenswrapper[4881]: E0121 13:05:46.714053 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bec100bc-3f06-4e9f-92c8-d2150746c720" containerName="extract-content" Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.714063 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bec100bc-3f06-4e9f-92c8-d2150746c720" containerName="extract-content" Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.714287 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="bec100bc-3f06-4e9f-92c8-d2150746c720" containerName="registry-server" Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.716108 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.741078 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-klj4j"] Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.759865 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c1f2821-4561-4775-afd7-f995c7794eb9-utilities\") pod \"certified-operators-klj4j\" (UID: \"1c1f2821-4561-4775-afd7-f995c7794eb9\") " pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.759929 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8w72\" (UniqueName: \"kubernetes.io/projected/1c1f2821-4561-4775-afd7-f995c7794eb9-kube-api-access-x8w72\") pod \"certified-operators-klj4j\" (UID: \"1c1f2821-4561-4775-afd7-f995c7794eb9\") " pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.759967 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c1f2821-4561-4775-afd7-f995c7794eb9-catalog-content\") pod \"certified-operators-klj4j\" (UID: \"1c1f2821-4561-4775-afd7-f995c7794eb9\") " pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.861626 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c1f2821-4561-4775-afd7-f995c7794eb9-utilities\") pod \"certified-operators-klj4j\" (UID: \"1c1f2821-4561-4775-afd7-f995c7794eb9\") " pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.861672 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8w72\" (UniqueName: \"kubernetes.io/projected/1c1f2821-4561-4775-afd7-f995c7794eb9-kube-api-access-x8w72\") pod \"certified-operators-klj4j\" (UID: \"1c1f2821-4561-4775-afd7-f995c7794eb9\") " pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.861698 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c1f2821-4561-4775-afd7-f995c7794eb9-catalog-content\") pod \"certified-operators-klj4j\" (UID: \"1c1f2821-4561-4775-afd7-f995c7794eb9\") " pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.862460 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c1f2821-4561-4775-afd7-f995c7794eb9-catalog-content\") pod \"certified-operators-klj4j\" (UID: \"1c1f2821-4561-4775-afd7-f995c7794eb9\") " pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.863414 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c1f2821-4561-4775-afd7-f995c7794eb9-utilities\") pod \"certified-operators-klj4j\" (UID: \"1c1f2821-4561-4775-afd7-f995c7794eb9\") " pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.889773 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-x8w72\" (UniqueName: \"kubernetes.io/projected/1c1f2821-4561-4775-afd7-f995c7794eb9-kube-api-access-x8w72\") pod \"certified-operators-klj4j\" (UID: \"1c1f2821-4561-4775-afd7-f995c7794eb9\") " pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:05:47 crc kubenswrapper[4881]: I0121 13:05:47.044345 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:05:47 crc kubenswrapper[4881]: I0121 13:05:47.590761 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-klj4j"] Jan 21 13:05:47 crc kubenswrapper[4881]: W0121 13:05:47.593389 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c1f2821_4561_4775_afd7_f995c7794eb9.slice/crio-5912b65cbe84841f73cf1f4bf22a99a12dcb5a75557795c084a467bac35321b7 WatchSource:0}: Error finding container 5912b65cbe84841f73cf1f4bf22a99a12dcb5a75557795c084a467bac35321b7: Status 404 returned error can't find the container with id 5912b65cbe84841f73cf1f4bf22a99a12dcb5a75557795c084a467bac35321b7 Jan 21 13:05:48 crc kubenswrapper[4881]: I0121 13:05:48.005902 4881 generic.go:334] "Generic (PLEG): container finished" podID="1c1f2821-4561-4775-afd7-f995c7794eb9" containerID="b1795ee85622e8be16c281770c1151c3435236a0db4fe5ab1cd387997e3d12e4" exitCode=0 Jan 21 13:05:48 crc kubenswrapper[4881]: I0121 13:05:48.006007 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klj4j" event={"ID":"1c1f2821-4561-4775-afd7-f995c7794eb9","Type":"ContainerDied","Data":"b1795ee85622e8be16c281770c1151c3435236a0db4fe5ab1cd387997e3d12e4"} Jan 21 13:05:48 crc kubenswrapper[4881]: I0121 13:05:48.006452 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klj4j" event={"ID":"1c1f2821-4561-4775-afd7-f995c7794eb9","Type":"ContainerStarted","Data":"5912b65cbe84841f73cf1f4bf22a99a12dcb5a75557795c084a467bac35321b7"} Jan 21 13:05:49 crc kubenswrapper[4881]: I0121 13:05:49.021698 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klj4j" event={"ID":"1c1f2821-4561-4775-afd7-f995c7794eb9","Type":"ContainerStarted","Data":"71abf720f18590d40c2cbb24e9f89c5138503153c763f029873791959d9b57f6"} Jan 21 13:05:50 crc kubenswrapper[4881]: I0121 13:05:50.042938 4881 generic.go:334] "Generic (PLEG): container finished" podID="1c1f2821-4561-4775-afd7-f995c7794eb9" containerID="71abf720f18590d40c2cbb24e9f89c5138503153c763f029873791959d9b57f6" exitCode=0 Jan 21 13:05:50 crc kubenswrapper[4881]: I0121 13:05:50.043023 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klj4j" event={"ID":"1c1f2821-4561-4775-afd7-f995c7794eb9","Type":"ContainerDied","Data":"71abf720f18590d40c2cbb24e9f89c5138503153c763f029873791959d9b57f6"} Jan 21 13:05:51 crc kubenswrapper[4881]: I0121 13:05:51.053909 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klj4j" event={"ID":"1c1f2821-4561-4775-afd7-f995c7794eb9","Type":"ContainerStarted","Data":"f1b1412a13772a8513f335c0801239091b7358315b12cc6ba6559e7a455c8685"} Jan 21 13:05:51 crc kubenswrapper[4881]: I0121 13:05:51.078663 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-klj4j" 
podStartSLOduration=2.405007554 podStartE2EDuration="5.078636914s" podCreationTimestamp="2026-01-21 13:05:46 +0000 UTC" firstStartedPulling="2026-01-21 13:05:48.008612051 +0000 UTC m=+7735.268568540" lastFinishedPulling="2026-01-21 13:05:50.682241401 +0000 UTC m=+7737.942197900" observedRunningTime="2026-01-21 13:05:51.071732097 +0000 UTC m=+7738.331688576" watchObservedRunningTime="2026-01-21 13:05:51.078636914 +0000 UTC m=+7738.338593393" Jan 21 13:05:57 crc kubenswrapper[4881]: I0121 13:05:57.045766 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:05:57 crc kubenswrapper[4881]: I0121 13:05:57.046340 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:05:57 crc kubenswrapper[4881]: I0121 13:05:57.119223 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:05:57 crc kubenswrapper[4881]: I0121 13:05:57.327550 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:05:57 crc kubenswrapper[4881]: I0121 13:05:57.389231 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-klj4j"] Jan 21 13:05:59 crc kubenswrapper[4881]: I0121 13:05:59.349337 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-klj4j" podUID="1c1f2821-4561-4775-afd7-f995c7794eb9" containerName="registry-server" containerID="cri-o://f1b1412a13772a8513f335c0801239091b7358315b12cc6ba6559e7a455c8685" gracePeriod=2 Jan 21 13:05:59 crc kubenswrapper[4881]: I0121 13:05:59.930392 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.057462 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c1f2821-4561-4775-afd7-f995c7794eb9-catalog-content\") pod \"1c1f2821-4561-4775-afd7-f995c7794eb9\" (UID: \"1c1f2821-4561-4775-afd7-f995c7794eb9\") " Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.057710 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8w72\" (UniqueName: \"kubernetes.io/projected/1c1f2821-4561-4775-afd7-f995c7794eb9-kube-api-access-x8w72\") pod \"1c1f2821-4561-4775-afd7-f995c7794eb9\" (UID: \"1c1f2821-4561-4775-afd7-f995c7794eb9\") " Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.057730 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c1f2821-4561-4775-afd7-f995c7794eb9-utilities\") pod \"1c1f2821-4561-4775-afd7-f995c7794eb9\" (UID: \"1c1f2821-4561-4775-afd7-f995c7794eb9\") " Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.059146 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c1f2821-4561-4775-afd7-f995c7794eb9-utilities" (OuterVolumeSpecName: "utilities") pod "1c1f2821-4561-4775-afd7-f995c7794eb9" (UID: "1c1f2821-4561-4775-afd7-f995c7794eb9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.064538 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c1f2821-4561-4775-afd7-f995c7794eb9-kube-api-access-x8w72" (OuterVolumeSpecName: "kube-api-access-x8w72") pod "1c1f2821-4561-4775-afd7-f995c7794eb9" (UID: "1c1f2821-4561-4775-afd7-f995c7794eb9"). InnerVolumeSpecName "kube-api-access-x8w72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.107008 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c1f2821-4561-4775-afd7-f995c7794eb9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c1f2821-4561-4775-afd7-f995c7794eb9" (UID: "1c1f2821-4561-4775-afd7-f995c7794eb9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.160764 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c1f2821-4561-4775-afd7-f995c7794eb9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.160833 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8w72\" (UniqueName: \"kubernetes.io/projected/1c1f2821-4561-4775-afd7-f995c7794eb9-kube-api-access-x8w72\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.160850 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c1f2821-4561-4775-afd7-f995c7794eb9-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.365754 4881 generic.go:334] "Generic (PLEG): container finished" podID="1c1f2821-4561-4775-afd7-f995c7794eb9" containerID="f1b1412a13772a8513f335c0801239091b7358315b12cc6ba6559e7a455c8685" exitCode=0 Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.365814 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klj4j" event={"ID":"1c1f2821-4561-4775-afd7-f995c7794eb9","Type":"ContainerDied","Data":"f1b1412a13772a8513f335c0801239091b7358315b12cc6ba6559e7a455c8685"} Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.365855 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klj4j" event={"ID":"1c1f2821-4561-4775-afd7-f995c7794eb9","Type":"ContainerDied","Data":"5912b65cbe84841f73cf1f4bf22a99a12dcb5a75557795c084a467bac35321b7"} Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.365876 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.365901 4881 scope.go:117] "RemoveContainer" containerID="f1b1412a13772a8513f335c0801239091b7358315b12cc6ba6559e7a455c8685" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.410099 4881 scope.go:117] "RemoveContainer" containerID="71abf720f18590d40c2cbb24e9f89c5138503153c763f029873791959d9b57f6" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.421201 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-klj4j"] Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.430407 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-klj4j"] Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.440523 4881 scope.go:117] "RemoveContainer" containerID="b1795ee85622e8be16c281770c1151c3435236a0db4fe5ab1cd387997e3d12e4" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.506960 4881 scope.go:117] "RemoveContainer" containerID="f1b1412a13772a8513f335c0801239091b7358315b12cc6ba6559e7a455c8685" Jan 21 13:06:00 crc kubenswrapper[4881]: E0121 13:06:00.507737 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1b1412a13772a8513f335c0801239091b7358315b12cc6ba6559e7a455c8685\": container with ID starting with f1b1412a13772a8513f335c0801239091b7358315b12cc6ba6559e7a455c8685 not found: ID does not exist" containerID="f1b1412a13772a8513f335c0801239091b7358315b12cc6ba6559e7a455c8685" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.507833 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1b1412a13772a8513f335c0801239091b7358315b12cc6ba6559e7a455c8685"} err="failed to get container status \"f1b1412a13772a8513f335c0801239091b7358315b12cc6ba6559e7a455c8685\": rpc error: code = NotFound desc = could not find container \"f1b1412a13772a8513f335c0801239091b7358315b12cc6ba6559e7a455c8685\": container with ID starting with f1b1412a13772a8513f335c0801239091b7358315b12cc6ba6559e7a455c8685 not found: ID does not exist" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.507867 4881 scope.go:117] "RemoveContainer" containerID="71abf720f18590d40c2cbb24e9f89c5138503153c763f029873791959d9b57f6" Jan 21 13:06:00 crc kubenswrapper[4881]: E0121 13:06:00.508406 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71abf720f18590d40c2cbb24e9f89c5138503153c763f029873791959d9b57f6\": container with ID starting with 71abf720f18590d40c2cbb24e9f89c5138503153c763f029873791959d9b57f6 not found: ID does not exist" containerID="71abf720f18590d40c2cbb24e9f89c5138503153c763f029873791959d9b57f6" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.508437 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71abf720f18590d40c2cbb24e9f89c5138503153c763f029873791959d9b57f6"} err="failed to get container status \"71abf720f18590d40c2cbb24e9f89c5138503153c763f029873791959d9b57f6\": rpc error: code = NotFound desc = could not find container \"71abf720f18590d40c2cbb24e9f89c5138503153c763f029873791959d9b57f6\": container with ID starting with 71abf720f18590d40c2cbb24e9f89c5138503153c763f029873791959d9b57f6 not found: ID does not exist" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.508458 4881 scope.go:117] "RemoveContainer" 
containerID="b1795ee85622e8be16c281770c1151c3435236a0db4fe5ab1cd387997e3d12e4" Jan 21 13:06:00 crc kubenswrapper[4881]: E0121 13:06:00.508933 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1795ee85622e8be16c281770c1151c3435236a0db4fe5ab1cd387997e3d12e4\": container with ID starting with b1795ee85622e8be16c281770c1151c3435236a0db4fe5ab1cd387997e3d12e4 not found: ID does not exist" containerID="b1795ee85622e8be16c281770c1151c3435236a0db4fe5ab1cd387997e3d12e4" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.508959 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1795ee85622e8be16c281770c1151c3435236a0db4fe5ab1cd387997e3d12e4"} err="failed to get container status \"b1795ee85622e8be16c281770c1151c3435236a0db4fe5ab1cd387997e3d12e4\": rpc error: code = NotFound desc = could not find container \"b1795ee85622e8be16c281770c1151c3435236a0db4fe5ab1cd387997e3d12e4\": container with ID starting with b1795ee85622e8be16c281770c1151c3435236a0db4fe5ab1cd387997e3d12e4 not found: ID does not exist" Jan 21 13:06:01 crc kubenswrapper[4881]: I0121 13:06:01.339270 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c1f2821-4561-4775-afd7-f995c7794eb9" path="/var/lib/kubelet/pods/1c1f2821-4561-4775-afd7-f995c7794eb9/volumes" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.266811 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gss9b"] Jan 21 13:06:26 crc kubenswrapper[4881]: E0121 13:06:26.267763 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c1f2821-4561-4775-afd7-f995c7794eb9" containerName="extract-utilities" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.267785 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c1f2821-4561-4775-afd7-f995c7794eb9" containerName="extract-utilities" Jan 21 13:06:26 crc kubenswrapper[4881]: E0121 13:06:26.267825 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c1f2821-4561-4775-afd7-f995c7794eb9" containerName="extract-content" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.267833 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c1f2821-4561-4775-afd7-f995c7794eb9" containerName="extract-content" Jan 21 13:06:26 crc kubenswrapper[4881]: E0121 13:06:26.267861 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c1f2821-4561-4775-afd7-f995c7794eb9" containerName="registry-server" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.267871 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c1f2821-4561-4775-afd7-f995c7794eb9" containerName="registry-server" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.268146 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c1f2821-4561-4775-afd7-f995c7794eb9" containerName="registry-server" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.270021 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.289201 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gss9b"] Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.298600 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxxrm\" (UniqueName: \"kubernetes.io/projected/442f5627-e1c1-4ccc-9b75-c011f432c2a8-kube-api-access-jxxrm\") pod \"community-operators-gss9b\" (UID: \"442f5627-e1c1-4ccc-9b75-c011f432c2a8\") " pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.298889 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/442f5627-e1c1-4ccc-9b75-c011f432c2a8-catalog-content\") pod \"community-operators-gss9b\" (UID: \"442f5627-e1c1-4ccc-9b75-c011f432c2a8\") " pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.299636 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/442f5627-e1c1-4ccc-9b75-c011f432c2a8-utilities\") pod \"community-operators-gss9b\" (UID: \"442f5627-e1c1-4ccc-9b75-c011f432c2a8\") " pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.401230 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxxrm\" (UniqueName: \"kubernetes.io/projected/442f5627-e1c1-4ccc-9b75-c011f432c2a8-kube-api-access-jxxrm\") pod \"community-operators-gss9b\" (UID: \"442f5627-e1c1-4ccc-9b75-c011f432c2a8\") " pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.401728 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/442f5627-e1c1-4ccc-9b75-c011f432c2a8-catalog-content\") pod \"community-operators-gss9b\" (UID: \"442f5627-e1c1-4ccc-9b75-c011f432c2a8\") " pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.402042 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/442f5627-e1c1-4ccc-9b75-c011f432c2a8-utilities\") pod \"community-operators-gss9b\" (UID: \"442f5627-e1c1-4ccc-9b75-c011f432c2a8\") " pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.402529 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/442f5627-e1c1-4ccc-9b75-c011f432c2a8-utilities\") pod \"community-operators-gss9b\" (UID: \"442f5627-e1c1-4ccc-9b75-c011f432c2a8\") " pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.402789 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/442f5627-e1c1-4ccc-9b75-c011f432c2a8-catalog-content\") pod \"community-operators-gss9b\" (UID: \"442f5627-e1c1-4ccc-9b75-c011f432c2a8\") " pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.423889 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jxxrm\" (UniqueName: \"kubernetes.io/projected/442f5627-e1c1-4ccc-9b75-c011f432c2a8-kube-api-access-jxxrm\") pod \"community-operators-gss9b\" (UID: \"442f5627-e1c1-4ccc-9b75-c011f432c2a8\") " pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.644851 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:27 crc kubenswrapper[4881]: I0121 13:06:27.186207 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gss9b"] Jan 21 13:06:27 crc kubenswrapper[4881]: I0121 13:06:27.790176 4881 generic.go:334] "Generic (PLEG): container finished" podID="442f5627-e1c1-4ccc-9b75-c011f432c2a8" containerID="5e7e7c9ddb17ce2fda50d8009f6372fc579b02d4dfffbc72d9a91591a834ccd8" exitCode=0 Jan 21 13:06:27 crc kubenswrapper[4881]: I0121 13:06:27.790424 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gss9b" event={"ID":"442f5627-e1c1-4ccc-9b75-c011f432c2a8","Type":"ContainerDied","Data":"5e7e7c9ddb17ce2fda50d8009f6372fc579b02d4dfffbc72d9a91591a834ccd8"} Jan 21 13:06:27 crc kubenswrapper[4881]: I0121 13:06:27.790909 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gss9b" event={"ID":"442f5627-e1c1-4ccc-9b75-c011f432c2a8","Type":"ContainerStarted","Data":"13d74233d2fee10bbf68c00871b803fb4c61e118c339bfc524797906efc7d658"} Jan 21 13:06:29 crc kubenswrapper[4881]: I0121 13:06:29.886502 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gss9b" event={"ID":"442f5627-e1c1-4ccc-9b75-c011f432c2a8","Type":"ContainerStarted","Data":"d3455e35e2c9626cf2ac0d5851973407a0f80cd2ba16102bea88b2eca02723df"} Jan 21 13:06:30 crc kubenswrapper[4881]: I0121 13:06:30.902218 4881 generic.go:334] "Generic (PLEG): container finished" podID="442f5627-e1c1-4ccc-9b75-c011f432c2a8" containerID="d3455e35e2c9626cf2ac0d5851973407a0f80cd2ba16102bea88b2eca02723df" exitCode=0 Jan 21 13:06:30 crc kubenswrapper[4881]: I0121 13:06:30.902267 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gss9b" event={"ID":"442f5627-e1c1-4ccc-9b75-c011f432c2a8","Type":"ContainerDied","Data":"d3455e35e2c9626cf2ac0d5851973407a0f80cd2ba16102bea88b2eca02723df"} Jan 21 13:06:30 crc kubenswrapper[4881]: I0121 13:06:30.902732 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gss9b" event={"ID":"442f5627-e1c1-4ccc-9b75-c011f432c2a8","Type":"ContainerStarted","Data":"6ad24cd9e583477a5a1245dcbae85883798e9c12fde8b0dd24ac9d2a5b2f2926"} Jan 21 13:06:30 crc kubenswrapper[4881]: I0121 13:06:30.935988 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gss9b" podStartSLOduration=2.397782226 podStartE2EDuration="4.935965455s" podCreationTimestamp="2026-01-21 13:06:26 +0000 UTC" firstStartedPulling="2026-01-21 13:06:27.792633796 +0000 UTC m=+7775.052590265" lastFinishedPulling="2026-01-21 13:06:30.330816985 +0000 UTC m=+7777.590773494" observedRunningTime="2026-01-21 13:06:30.924361344 +0000 UTC m=+7778.184317843" watchObservedRunningTime="2026-01-21 13:06:30.935965455 +0000 UTC m=+7778.195921924" Jan 21 13:06:35 crc kubenswrapper[4881]: I0121 13:06:35.440250 4881 scope.go:117] "RemoveContainer" 
containerID="adc0b5280c47db093a6ec180a9e5726fbeb5b4a901615e6f06978e816e37c4a2" Jan 21 13:06:36 crc kubenswrapper[4881]: I0121 13:06:36.645112 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:36 crc kubenswrapper[4881]: I0121 13:06:36.645447 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:36 crc kubenswrapper[4881]: I0121 13:06:36.726202 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:37 crc kubenswrapper[4881]: I0121 13:06:37.036762 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:37 crc kubenswrapper[4881]: I0121 13:06:37.098972 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gss9b"] Jan 21 13:06:39 crc kubenswrapper[4881]: I0121 13:06:39.004185 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gss9b" podUID="442f5627-e1c1-4ccc-9b75-c011f432c2a8" containerName="registry-server" containerID="cri-o://6ad24cd9e583477a5a1245dcbae85883798e9c12fde8b0dd24ac9d2a5b2f2926" gracePeriod=2 Jan 21 13:06:39 crc kubenswrapper[4881]: I0121 13:06:39.534934 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:39 crc kubenswrapper[4881]: I0121 13:06:39.726077 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxxrm\" (UniqueName: \"kubernetes.io/projected/442f5627-e1c1-4ccc-9b75-c011f432c2a8-kube-api-access-jxxrm\") pod \"442f5627-e1c1-4ccc-9b75-c011f432c2a8\" (UID: \"442f5627-e1c1-4ccc-9b75-c011f432c2a8\") " Jan 21 13:06:39 crc kubenswrapper[4881]: I0121 13:06:39.726309 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/442f5627-e1c1-4ccc-9b75-c011f432c2a8-utilities\") pod \"442f5627-e1c1-4ccc-9b75-c011f432c2a8\" (UID: \"442f5627-e1c1-4ccc-9b75-c011f432c2a8\") " Jan 21 13:06:39 crc kubenswrapper[4881]: I0121 13:06:39.726356 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/442f5627-e1c1-4ccc-9b75-c011f432c2a8-catalog-content\") pod \"442f5627-e1c1-4ccc-9b75-c011f432c2a8\" (UID: \"442f5627-e1c1-4ccc-9b75-c011f432c2a8\") " Jan 21 13:06:39 crc kubenswrapper[4881]: I0121 13:06:39.727449 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/442f5627-e1c1-4ccc-9b75-c011f432c2a8-utilities" (OuterVolumeSpecName: "utilities") pod "442f5627-e1c1-4ccc-9b75-c011f432c2a8" (UID: "442f5627-e1c1-4ccc-9b75-c011f432c2a8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:06:39 crc kubenswrapper[4881]: I0121 13:06:39.739759 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/442f5627-e1c1-4ccc-9b75-c011f432c2a8-kube-api-access-jxxrm" (OuterVolumeSpecName: "kube-api-access-jxxrm") pod "442f5627-e1c1-4ccc-9b75-c011f432c2a8" (UID: "442f5627-e1c1-4ccc-9b75-c011f432c2a8"). InnerVolumeSpecName "kube-api-access-jxxrm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:06:39 crc kubenswrapper[4881]: I0121 13:06:39.829069 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/442f5627-e1c1-4ccc-9b75-c011f432c2a8-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:39 crc kubenswrapper[4881]: I0121 13:06:39.829114 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxxrm\" (UniqueName: \"kubernetes.io/projected/442f5627-e1c1-4ccc-9b75-c011f432c2a8-kube-api-access-jxxrm\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:39 crc kubenswrapper[4881]: I0121 13:06:39.888944 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/442f5627-e1c1-4ccc-9b75-c011f432c2a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "442f5627-e1c1-4ccc-9b75-c011f432c2a8" (UID: "442f5627-e1c1-4ccc-9b75-c011f432c2a8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:06:39 crc kubenswrapper[4881]: I0121 13:06:39.931115 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/442f5627-e1c1-4ccc-9b75-c011f432c2a8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:40 crc kubenswrapper[4881]: I0121 13:06:40.016155 4881 generic.go:334] "Generic (PLEG): container finished" podID="442f5627-e1c1-4ccc-9b75-c011f432c2a8" containerID="6ad24cd9e583477a5a1245dcbae85883798e9c12fde8b0dd24ac9d2a5b2f2926" exitCode=0 Jan 21 13:06:40 crc kubenswrapper[4881]: I0121 13:06:40.016213 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gss9b" event={"ID":"442f5627-e1c1-4ccc-9b75-c011f432c2a8","Type":"ContainerDied","Data":"6ad24cd9e583477a5a1245dcbae85883798e9c12fde8b0dd24ac9d2a5b2f2926"} Jan 21 13:06:40 crc kubenswrapper[4881]: I0121 13:06:40.016245 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gss9b" event={"ID":"442f5627-e1c1-4ccc-9b75-c011f432c2a8","Type":"ContainerDied","Data":"13d74233d2fee10bbf68c00871b803fb4c61e118c339bfc524797906efc7d658"} Jan 21 13:06:40 crc kubenswrapper[4881]: I0121 13:06:40.016251 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:40 crc kubenswrapper[4881]: I0121 13:06:40.016264 4881 scope.go:117] "RemoveContainer" containerID="6ad24cd9e583477a5a1245dcbae85883798e9c12fde8b0dd24ac9d2a5b2f2926" Jan 21 13:06:40 crc kubenswrapper[4881]: I0121 13:06:40.047441 4881 scope.go:117] "RemoveContainer" containerID="d3455e35e2c9626cf2ac0d5851973407a0f80cd2ba16102bea88b2eca02723df" Jan 21 13:06:40 crc kubenswrapper[4881]: I0121 13:06:40.091213 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gss9b"] Jan 21 13:06:40 crc kubenswrapper[4881]: I0121 13:06:40.094460 4881 scope.go:117] "RemoveContainer" containerID="5e7e7c9ddb17ce2fda50d8009f6372fc579b02d4dfffbc72d9a91591a834ccd8" Jan 21 13:06:40 crc kubenswrapper[4881]: I0121 13:06:40.107399 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gss9b"] Jan 21 13:06:40 crc kubenswrapper[4881]: I0121 13:06:40.137407 4881 scope.go:117] "RemoveContainer" containerID="6ad24cd9e583477a5a1245dcbae85883798e9c12fde8b0dd24ac9d2a5b2f2926" Jan 21 13:06:40 crc kubenswrapper[4881]: E0121 13:06:40.137873 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ad24cd9e583477a5a1245dcbae85883798e9c12fde8b0dd24ac9d2a5b2f2926\": container with ID starting with 6ad24cd9e583477a5a1245dcbae85883798e9c12fde8b0dd24ac9d2a5b2f2926 not found: ID does not exist" containerID="6ad24cd9e583477a5a1245dcbae85883798e9c12fde8b0dd24ac9d2a5b2f2926" Jan 21 13:06:40 crc kubenswrapper[4881]: I0121 13:06:40.137903 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ad24cd9e583477a5a1245dcbae85883798e9c12fde8b0dd24ac9d2a5b2f2926"} err="failed to get container status \"6ad24cd9e583477a5a1245dcbae85883798e9c12fde8b0dd24ac9d2a5b2f2926\": rpc error: code = NotFound desc = could not find container \"6ad24cd9e583477a5a1245dcbae85883798e9c12fde8b0dd24ac9d2a5b2f2926\": container with ID starting with 6ad24cd9e583477a5a1245dcbae85883798e9c12fde8b0dd24ac9d2a5b2f2926 not found: ID does not exist" Jan 21 13:06:40 crc kubenswrapper[4881]: I0121 13:06:40.137925 4881 scope.go:117] "RemoveContainer" containerID="d3455e35e2c9626cf2ac0d5851973407a0f80cd2ba16102bea88b2eca02723df" Jan 21 13:06:40 crc kubenswrapper[4881]: E0121 13:06:40.138194 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3455e35e2c9626cf2ac0d5851973407a0f80cd2ba16102bea88b2eca02723df\": container with ID starting with d3455e35e2c9626cf2ac0d5851973407a0f80cd2ba16102bea88b2eca02723df not found: ID does not exist" containerID="d3455e35e2c9626cf2ac0d5851973407a0f80cd2ba16102bea88b2eca02723df" Jan 21 13:06:40 crc kubenswrapper[4881]: I0121 13:06:40.138213 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3455e35e2c9626cf2ac0d5851973407a0f80cd2ba16102bea88b2eca02723df"} err="failed to get container status \"d3455e35e2c9626cf2ac0d5851973407a0f80cd2ba16102bea88b2eca02723df\": rpc error: code = NotFound desc = could not find container \"d3455e35e2c9626cf2ac0d5851973407a0f80cd2ba16102bea88b2eca02723df\": container with ID starting with d3455e35e2c9626cf2ac0d5851973407a0f80cd2ba16102bea88b2eca02723df not found: ID does not exist" Jan 21 13:06:40 crc kubenswrapper[4881]: I0121 13:06:40.138226 4881 scope.go:117] "RemoveContainer" 
containerID="5e7e7c9ddb17ce2fda50d8009f6372fc579b02d4dfffbc72d9a91591a834ccd8" Jan 21 13:06:40 crc kubenswrapper[4881]: E0121 13:06:40.138389 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e7e7c9ddb17ce2fda50d8009f6372fc579b02d4dfffbc72d9a91591a834ccd8\": container with ID starting with 5e7e7c9ddb17ce2fda50d8009f6372fc579b02d4dfffbc72d9a91591a834ccd8 not found: ID does not exist" containerID="5e7e7c9ddb17ce2fda50d8009f6372fc579b02d4dfffbc72d9a91591a834ccd8" Jan 21 13:06:40 crc kubenswrapper[4881]: I0121 13:06:40.138402 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e7e7c9ddb17ce2fda50d8009f6372fc579b02d4dfffbc72d9a91591a834ccd8"} err="failed to get container status \"5e7e7c9ddb17ce2fda50d8009f6372fc579b02d4dfffbc72d9a91591a834ccd8\": rpc error: code = NotFound desc = could not find container \"5e7e7c9ddb17ce2fda50d8009f6372fc579b02d4dfffbc72d9a91591a834ccd8\": container with ID starting with 5e7e7c9ddb17ce2fda50d8009f6372fc579b02d4dfffbc72d9a91591a834ccd8 not found: ID does not exist" Jan 21 13:06:41 crc kubenswrapper[4881]: I0121 13:06:41.323591 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="442f5627-e1c1-4ccc-9b75-c011f432c2a8" path="/var/lib/kubelet/pods/442f5627-e1c1-4ccc-9b75-c011f432c2a8/volumes" Jan 21 13:07:59 crc kubenswrapper[4881]: I0121 13:07:59.851541 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:07:59 crc kubenswrapper[4881]: I0121 13:07:59.852051 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:08:29 crc kubenswrapper[4881]: I0121 13:08:29.851856 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:08:29 crc kubenswrapper[4881]: I0121 13:08:29.852706 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:08:59 crc kubenswrapper[4881]: I0121 13:08:59.851360 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:08:59 crc kubenswrapper[4881]: I0121 13:08:59.851974 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:08:59 crc kubenswrapper[4881]: I0121 13:08:59.852063 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 13:08:59 crc kubenswrapper[4881]: I0121 13:08:59.853180 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"982d26bca9ae8535bd5c23122103aa1521012b2265c5406dc793a0fdc4c46b01"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 13:08:59 crc kubenswrapper[4881]: I0121 13:08:59.853276 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://982d26bca9ae8535bd5c23122103aa1521012b2265c5406dc793a0fdc4c46b01" gracePeriod=600 Jan 21 13:09:00 crc kubenswrapper[4881]: I0121 13:09:00.470307 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="982d26bca9ae8535bd5c23122103aa1521012b2265c5406dc793a0fdc4c46b01" exitCode=0 Jan 21 13:09:00 crc kubenswrapper[4881]: I0121 13:09:00.470371 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"982d26bca9ae8535bd5c23122103aa1521012b2265c5406dc793a0fdc4c46b01"} Jan 21 13:09:00 crc kubenswrapper[4881]: I0121 13:09:00.471171 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a"} Jan 21 13:09:00 crc kubenswrapper[4881]: I0121 13:09:00.471235 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:09:35 crc kubenswrapper[4881]: I0121 13:09:35.826405 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tz87r"] Jan 21 13:09:35 crc kubenswrapper[4881]: E0121 13:09:35.827583 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="442f5627-e1c1-4ccc-9b75-c011f432c2a8" containerName="extract-utilities" Jan 21 13:09:35 crc kubenswrapper[4881]: I0121 13:09:35.827616 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="442f5627-e1c1-4ccc-9b75-c011f432c2a8" containerName="extract-utilities" Jan 21 13:09:35 crc kubenswrapper[4881]: E0121 13:09:35.827644 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="442f5627-e1c1-4ccc-9b75-c011f432c2a8" containerName="extract-content" Jan 21 13:09:35 crc kubenswrapper[4881]: I0121 13:09:35.827652 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="442f5627-e1c1-4ccc-9b75-c011f432c2a8" containerName="extract-content" Jan 21 13:09:35 crc kubenswrapper[4881]: E0121 13:09:35.827663 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="442f5627-e1c1-4ccc-9b75-c011f432c2a8" containerName="registry-server" Jan 21 13:09:35 crc kubenswrapper[4881]: I0121 13:09:35.827671 4881 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="442f5627-e1c1-4ccc-9b75-c011f432c2a8" containerName="registry-server" Jan 21 13:09:35 crc kubenswrapper[4881]: I0121 13:09:35.827963 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="442f5627-e1c1-4ccc-9b75-c011f432c2a8" containerName="registry-server" Jan 21 13:09:35 crc kubenswrapper[4881]: I0121 13:09:35.831595 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:09:35 crc kubenswrapper[4881]: I0121 13:09:35.848416 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tz87r"] Jan 21 13:09:35 crc kubenswrapper[4881]: I0121 13:09:35.981462 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htw9q\" (UniqueName: \"kubernetes.io/projected/7a26c7f3-1ab1-4718-b38e-e7312fe50035-kube-api-access-htw9q\") pod \"redhat-operators-tz87r\" (UID: \"7a26c7f3-1ab1-4718-b38e-e7312fe50035\") " pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:09:35 crc kubenswrapper[4881]: I0121 13:09:35.981664 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a26c7f3-1ab1-4718-b38e-e7312fe50035-utilities\") pod \"redhat-operators-tz87r\" (UID: \"7a26c7f3-1ab1-4718-b38e-e7312fe50035\") " pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:09:35 crc kubenswrapper[4881]: I0121 13:09:35.981943 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a26c7f3-1ab1-4718-b38e-e7312fe50035-catalog-content\") pod \"redhat-operators-tz87r\" (UID: \"7a26c7f3-1ab1-4718-b38e-e7312fe50035\") " pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:09:36 crc kubenswrapper[4881]: I0121 13:09:36.084637 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a26c7f3-1ab1-4718-b38e-e7312fe50035-catalog-content\") pod \"redhat-operators-tz87r\" (UID: \"7a26c7f3-1ab1-4718-b38e-e7312fe50035\") " pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:09:36 crc kubenswrapper[4881]: I0121 13:09:36.084748 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htw9q\" (UniqueName: \"kubernetes.io/projected/7a26c7f3-1ab1-4718-b38e-e7312fe50035-kube-api-access-htw9q\") pod \"redhat-operators-tz87r\" (UID: \"7a26c7f3-1ab1-4718-b38e-e7312fe50035\") " pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:09:36 crc kubenswrapper[4881]: I0121 13:09:36.084818 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a26c7f3-1ab1-4718-b38e-e7312fe50035-utilities\") pod \"redhat-operators-tz87r\" (UID: \"7a26c7f3-1ab1-4718-b38e-e7312fe50035\") " pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:09:36 crc kubenswrapper[4881]: I0121 13:09:36.085317 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a26c7f3-1ab1-4718-b38e-e7312fe50035-utilities\") pod \"redhat-operators-tz87r\" (UID: \"7a26c7f3-1ab1-4718-b38e-e7312fe50035\") " pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:09:36 crc kubenswrapper[4881]: I0121 13:09:36.085474 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a26c7f3-1ab1-4718-b38e-e7312fe50035-catalog-content\") pod \"redhat-operators-tz87r\" (UID: \"7a26c7f3-1ab1-4718-b38e-e7312fe50035\") " pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:09:36 crc kubenswrapper[4881]: I0121 13:09:36.113386 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htw9q\" (UniqueName: \"kubernetes.io/projected/7a26c7f3-1ab1-4718-b38e-e7312fe50035-kube-api-access-htw9q\") pod \"redhat-operators-tz87r\" (UID: \"7a26c7f3-1ab1-4718-b38e-e7312fe50035\") " pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:09:36 crc kubenswrapper[4881]: I0121 13:09:36.157085 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:09:36 crc kubenswrapper[4881]: I0121 13:09:36.651257 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tz87r"] Jan 21 13:09:36 crc kubenswrapper[4881]: I0121 13:09:36.912851 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tz87r" event={"ID":"7a26c7f3-1ab1-4718-b38e-e7312fe50035","Type":"ContainerStarted","Data":"6fe7338bc95ad2647c2843d63b62e9c74936582099c757697291e6aa090f1c82"} Jan 21 13:09:37 crc kubenswrapper[4881]: I0121 13:09:37.931467 4881 generic.go:334] "Generic (PLEG): container finished" podID="7a26c7f3-1ab1-4718-b38e-e7312fe50035" containerID="e6d278b12f74a0eb2e4b0567a3236a047127fdb81f8d14d9c8935ae978677831" exitCode=0 Jan 21 13:09:37 crc kubenswrapper[4881]: I0121 13:09:37.931733 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tz87r" event={"ID":"7a26c7f3-1ab1-4718-b38e-e7312fe50035","Type":"ContainerDied","Data":"e6d278b12f74a0eb2e4b0567a3236a047127fdb81f8d14d9c8935ae978677831"} Jan 21 13:09:37 crc kubenswrapper[4881]: I0121 13:09:37.936544 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 13:09:40 crc kubenswrapper[4881]: I0121 13:09:40.970857 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tz87r" event={"ID":"7a26c7f3-1ab1-4718-b38e-e7312fe50035","Type":"ContainerStarted","Data":"dff38752b6a3e2077f07d070df230f1781b4a0baa15f1d79eded0d367d0049c2"} Jan 21 13:09:48 crc kubenswrapper[4881]: I0121 13:09:48.091538 4881 generic.go:334] "Generic (PLEG): container finished" podID="7a26c7f3-1ab1-4718-b38e-e7312fe50035" containerID="dff38752b6a3e2077f07d070df230f1781b4a0baa15f1d79eded0d367d0049c2" exitCode=0 Jan 21 13:09:48 crc kubenswrapper[4881]: I0121 13:09:48.091976 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tz87r" event={"ID":"7a26c7f3-1ab1-4718-b38e-e7312fe50035","Type":"ContainerDied","Data":"dff38752b6a3e2077f07d070df230f1781b4a0baa15f1d79eded0d367d0049c2"} Jan 21 13:09:52 crc kubenswrapper[4881]: I0121 13:09:52.133363 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tz87r" event={"ID":"7a26c7f3-1ab1-4718-b38e-e7312fe50035","Type":"ContainerStarted","Data":"7aacdf3842bfabc09a423a31fb163598b5ed593b68754535d446e7346bc57ef0"} Jan 21 13:09:52 crc kubenswrapper[4881]: I0121 13:09:52.160153 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tz87r" podStartSLOduration=4.592580994 podStartE2EDuration="17.160099374s" 
podCreationTimestamp="2026-01-21 13:09:35 +0000 UTC" firstStartedPulling="2026-01-21 13:09:37.936069784 +0000 UTC m=+7965.196026263" lastFinishedPulling="2026-01-21 13:09:50.503588174 +0000 UTC m=+7977.763544643" observedRunningTime="2026-01-21 13:09:52.154919148 +0000 UTC m=+7979.414875637" watchObservedRunningTime="2026-01-21 13:09:52.160099374 +0000 UTC m=+7979.420055853" Jan 21 13:09:56 crc kubenswrapper[4881]: I0121 13:09:56.158170 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:09:56 crc kubenswrapper[4881]: I0121 13:09:56.161343 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:09:57 crc kubenswrapper[4881]: I0121 13:09:57.239618 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tz87r" podUID="7a26c7f3-1ab1-4718-b38e-e7312fe50035" containerName="registry-server" probeResult="failure" output=< Jan 21 13:09:57 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 13:09:57 crc kubenswrapper[4881]: > Jan 21 13:10:06 crc kubenswrapper[4881]: I0121 13:10:06.210327 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:10:06 crc kubenswrapper[4881]: I0121 13:10:06.269877 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:10:07 crc kubenswrapper[4881]: I0121 13:10:07.032871 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tz87r"] Jan 21 13:10:07 crc kubenswrapper[4881]: I0121 13:10:07.299082 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tz87r" podUID="7a26c7f3-1ab1-4718-b38e-e7312fe50035" containerName="registry-server" containerID="cri-o://7aacdf3842bfabc09a423a31fb163598b5ed593b68754535d446e7346bc57ef0" gracePeriod=2 Jan 21 13:10:07 crc kubenswrapper[4881]: I0121 13:10:07.818465 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.120518 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a26c7f3-1ab1-4718-b38e-e7312fe50035-catalog-content\") pod \"7a26c7f3-1ab1-4718-b38e-e7312fe50035\" (UID: \"7a26c7f3-1ab1-4718-b38e-e7312fe50035\") " Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.120739 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htw9q\" (UniqueName: \"kubernetes.io/projected/7a26c7f3-1ab1-4718-b38e-e7312fe50035-kube-api-access-htw9q\") pod \"7a26c7f3-1ab1-4718-b38e-e7312fe50035\" (UID: \"7a26c7f3-1ab1-4718-b38e-e7312fe50035\") " Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.120820 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a26c7f3-1ab1-4718-b38e-e7312fe50035-utilities\") pod \"7a26c7f3-1ab1-4718-b38e-e7312fe50035\" (UID: \"7a26c7f3-1ab1-4718-b38e-e7312fe50035\") " Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.122571 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a26c7f3-1ab1-4718-b38e-e7312fe50035-utilities" (OuterVolumeSpecName: "utilities") pod "7a26c7f3-1ab1-4718-b38e-e7312fe50035" (UID: "7a26c7f3-1ab1-4718-b38e-e7312fe50035"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.135752 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a26c7f3-1ab1-4718-b38e-e7312fe50035-kube-api-access-htw9q" (OuterVolumeSpecName: "kube-api-access-htw9q") pod "7a26c7f3-1ab1-4718-b38e-e7312fe50035" (UID: "7a26c7f3-1ab1-4718-b38e-e7312fe50035"). InnerVolumeSpecName "kube-api-access-htw9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.223514 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htw9q\" (UniqueName: \"kubernetes.io/projected/7a26c7f3-1ab1-4718-b38e-e7312fe50035-kube-api-access-htw9q\") on node \"crc\" DevicePath \"\"" Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.223543 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a26c7f3-1ab1-4718-b38e-e7312fe50035-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.238678 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a26c7f3-1ab1-4718-b38e-e7312fe50035-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a26c7f3-1ab1-4718-b38e-e7312fe50035" (UID: "7a26c7f3-1ab1-4718-b38e-e7312fe50035"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.316011 4881 generic.go:334] "Generic (PLEG): container finished" podID="7a26c7f3-1ab1-4718-b38e-e7312fe50035" containerID="7aacdf3842bfabc09a423a31fb163598b5ed593b68754535d446e7346bc57ef0" exitCode=0 Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.316068 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tz87r" event={"ID":"7a26c7f3-1ab1-4718-b38e-e7312fe50035","Type":"ContainerDied","Data":"7aacdf3842bfabc09a423a31fb163598b5ed593b68754535d446e7346bc57ef0"} Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.316149 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tz87r" event={"ID":"7a26c7f3-1ab1-4718-b38e-e7312fe50035","Type":"ContainerDied","Data":"6fe7338bc95ad2647c2843d63b62e9c74936582099c757697291e6aa090f1c82"} Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.316177 4881 scope.go:117] "RemoveContainer" containerID="7aacdf3842bfabc09a423a31fb163598b5ed593b68754535d446e7346bc57ef0" Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.318036 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.326200 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a26c7f3-1ab1-4718-b38e-e7312fe50035-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.348871 4881 scope.go:117] "RemoveContainer" containerID="dff38752b6a3e2077f07d070df230f1781b4a0baa15f1d79eded0d367d0049c2" Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.369701 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tz87r"] Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.385599 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tz87r"] Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.395338 4881 scope.go:117] "RemoveContainer" containerID="e6d278b12f74a0eb2e4b0567a3236a047127fdb81f8d14d9c8935ae978677831" Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.436594 4881 scope.go:117] "RemoveContainer" containerID="7aacdf3842bfabc09a423a31fb163598b5ed593b68754535d446e7346bc57ef0" Jan 21 13:10:08 crc kubenswrapper[4881]: E0121 13:10:08.437086 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7aacdf3842bfabc09a423a31fb163598b5ed593b68754535d446e7346bc57ef0\": container with ID starting with 7aacdf3842bfabc09a423a31fb163598b5ed593b68754535d446e7346bc57ef0 not found: ID does not exist" containerID="7aacdf3842bfabc09a423a31fb163598b5ed593b68754535d446e7346bc57ef0" Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.437133 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aacdf3842bfabc09a423a31fb163598b5ed593b68754535d446e7346bc57ef0"} err="failed to get container status \"7aacdf3842bfabc09a423a31fb163598b5ed593b68754535d446e7346bc57ef0\": rpc error: code = NotFound desc = could not find container \"7aacdf3842bfabc09a423a31fb163598b5ed593b68754535d446e7346bc57ef0\": container with ID starting with 7aacdf3842bfabc09a423a31fb163598b5ed593b68754535d446e7346bc57ef0 not found: ID does not exist" Jan 21 13:10:08 crc 
kubenswrapper[4881]: I0121 13:10:08.437157 4881 scope.go:117] "RemoveContainer" containerID="dff38752b6a3e2077f07d070df230f1781b4a0baa15f1d79eded0d367d0049c2" Jan 21 13:10:08 crc kubenswrapper[4881]: E0121 13:10:08.437345 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dff38752b6a3e2077f07d070df230f1781b4a0baa15f1d79eded0d367d0049c2\": container with ID starting with dff38752b6a3e2077f07d070df230f1781b4a0baa15f1d79eded0d367d0049c2 not found: ID does not exist" containerID="dff38752b6a3e2077f07d070df230f1781b4a0baa15f1d79eded0d367d0049c2" Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.437367 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dff38752b6a3e2077f07d070df230f1781b4a0baa15f1d79eded0d367d0049c2"} err="failed to get container status \"dff38752b6a3e2077f07d070df230f1781b4a0baa15f1d79eded0d367d0049c2\": rpc error: code = NotFound desc = could not find container \"dff38752b6a3e2077f07d070df230f1781b4a0baa15f1d79eded0d367d0049c2\": container with ID starting with dff38752b6a3e2077f07d070df230f1781b4a0baa15f1d79eded0d367d0049c2 not found: ID does not exist" Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.437379 4881 scope.go:117] "RemoveContainer" containerID="e6d278b12f74a0eb2e4b0567a3236a047127fdb81f8d14d9c8935ae978677831" Jan 21 13:10:08 crc kubenswrapper[4881]: E0121 13:10:08.437567 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6d278b12f74a0eb2e4b0567a3236a047127fdb81f8d14d9c8935ae978677831\": container with ID starting with e6d278b12f74a0eb2e4b0567a3236a047127fdb81f8d14d9c8935ae978677831 not found: ID does not exist" containerID="e6d278b12f74a0eb2e4b0567a3236a047127fdb81f8d14d9c8935ae978677831" Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.437593 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6d278b12f74a0eb2e4b0567a3236a047127fdb81f8d14d9c8935ae978677831"} err="failed to get container status \"e6d278b12f74a0eb2e4b0567a3236a047127fdb81f8d14d9c8935ae978677831\": rpc error: code = NotFound desc = could not find container \"e6d278b12f74a0eb2e4b0567a3236a047127fdb81f8d14d9c8935ae978677831\": container with ID starting with e6d278b12f74a0eb2e4b0567a3236a047127fdb81f8d14d9c8935ae978677831 not found: ID does not exist" Jan 21 13:10:09 crc kubenswrapper[4881]: I0121 13:10:09.328712 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a26c7f3-1ab1-4718-b38e-e7312fe50035" path="/var/lib/kubelet/pods/7a26c7f3-1ab1-4718-b38e-e7312fe50035/volumes" Jan 21 13:10:35 crc kubenswrapper[4881]: I0121 13:10:35.643064 4881 scope.go:117] "RemoveContainer" containerID="52d6e3407218ada320893735ba478f1369a2a54d0c437542b8c2fab3e35c4b65" Jan 21 13:10:35 crc kubenswrapper[4881]: I0121 13:10:35.697063 4881 scope.go:117] "RemoveContainer" containerID="7ba9268affb7b36ede0c95f07ffb37c2eedb4287b3034bd5ca41d251a17b650e" Jan 21 13:10:35 crc kubenswrapper[4881]: I0121 13:10:35.775572 4881 scope.go:117] "RemoveContainer" containerID="a613acb4af5b4ff0151733e528bac6fafdfcaaa1c659f0a6b2cc1730debc40e3" Jan 21 13:11:29 crc kubenswrapper[4881]: I0121 13:11:29.851330 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Jan 21 13:11:29 crc kubenswrapper[4881]: I0121 13:11:29.852060 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:11:59 crc kubenswrapper[4881]: I0121 13:11:59.851314 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:11:59 crc kubenswrapper[4881]: I0121 13:11:59.851831 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:12:29 crc kubenswrapper[4881]: I0121 13:12:29.851424 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:12:29 crc kubenswrapper[4881]: I0121 13:12:29.853070 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:12:29 crc kubenswrapper[4881]: I0121 13:12:29.853334 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 13:12:29 crc kubenswrapper[4881]: I0121 13:12:29.854434 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 13:12:29 crc kubenswrapper[4881]: I0121 13:12:29.854624 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" gracePeriod=600 Jan 21 13:12:29 crc kubenswrapper[4881]: E0121 13:12:29.990246 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:12:30 crc kubenswrapper[4881]: I0121 13:12:30.242713 4881 generic.go:334] "Generic 
(PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" exitCode=0 Jan 21 13:12:30 crc kubenswrapper[4881]: I0121 13:12:30.242763 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a"} Jan 21 13:12:30 crc kubenswrapper[4881]: I0121 13:12:30.242820 4881 scope.go:117] "RemoveContainer" containerID="982d26bca9ae8535bd5c23122103aa1521012b2265c5406dc793a0fdc4c46b01" Jan 21 13:12:30 crc kubenswrapper[4881]: I0121 13:12:30.243670 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:12:30 crc kubenswrapper[4881]: E0121 13:12:30.244065 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:12:41 crc kubenswrapper[4881]: I0121 13:12:41.312037 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:12:41 crc kubenswrapper[4881]: E0121 13:12:41.312891 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:12:56 crc kubenswrapper[4881]: I0121 13:12:56.310680 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:12:56 crc kubenswrapper[4881]: E0121 13:12:56.313741 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:13:09 crc kubenswrapper[4881]: I0121 13:13:09.312401 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:13:09 crc kubenswrapper[4881]: E0121 13:13:09.313366 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:13:21 crc kubenswrapper[4881]: I0121 13:13:21.311099 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 
13:13:21 crc kubenswrapper[4881]: E0121 13:13:21.311806 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:13:32 crc kubenswrapper[4881]: I0121 13:13:32.312519 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:13:32 crc kubenswrapper[4881]: E0121 13:13:32.313476 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:13:44 crc kubenswrapper[4881]: I0121 13:13:44.313871 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:13:44 crc kubenswrapper[4881]: E0121 13:13:44.314497 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:13:56 crc kubenswrapper[4881]: I0121 13:13:56.310986 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:13:56 crc kubenswrapper[4881]: E0121 13:13:56.312163 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:14:09 crc kubenswrapper[4881]: I0121 13:14:09.312452 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:14:09 crc kubenswrapper[4881]: E0121 13:14:09.313348 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:14:21 crc kubenswrapper[4881]: I0121 13:14:21.311115 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:14:21 crc kubenswrapper[4881]: E0121 13:14:21.312204 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:14:33 crc kubenswrapper[4881]: I0121 13:14:33.323509 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:14:33 crc kubenswrapper[4881]: E0121 13:14:33.324856 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:14:46 crc kubenswrapper[4881]: I0121 13:14:46.311585 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:14:46 crc kubenswrapper[4881]: E0121 13:14:46.312959 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:14:59 crc kubenswrapper[4881]: I0121 13:14:59.310707 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:14:59 crc kubenswrapper[4881]: E0121 13:14:59.311545 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.202926 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq"] Jan 21 13:15:00 crc kubenswrapper[4881]: E0121 13:15:00.203860 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a26c7f3-1ab1-4718-b38e-e7312fe50035" containerName="registry-server" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.203885 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a26c7f3-1ab1-4718-b38e-e7312fe50035" containerName="registry-server" Jan 21 13:15:00 crc kubenswrapper[4881]: E0121 13:15:00.203916 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a26c7f3-1ab1-4718-b38e-e7312fe50035" containerName="extract-utilities" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.203923 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a26c7f3-1ab1-4718-b38e-e7312fe50035" containerName="extract-utilities" Jan 21 13:15:00 crc kubenswrapper[4881]: E0121 13:15:00.203935 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a26c7f3-1ab1-4718-b38e-e7312fe50035" containerName="extract-content" Jan 21 13:15:00 crc 
kubenswrapper[4881]: I0121 13:15:00.203940 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a26c7f3-1ab1-4718-b38e-e7312fe50035" containerName="extract-content" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.204149 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a26c7f3-1ab1-4718-b38e-e7312fe50035" containerName="registry-server" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.204971 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.207920 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.213800 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.230722 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq"] Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.284959 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f55151a2-6511-456d-b38a-be9f5a21c93c-secret-volume\") pod \"collect-profiles-29483355-2ccpq\" (UID: \"f55151a2-6511-456d-b38a-be9f5a21c93c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.285021 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9q2q\" (UniqueName: \"kubernetes.io/projected/f55151a2-6511-456d-b38a-be9f5a21c93c-kube-api-access-r9q2q\") pod \"collect-profiles-29483355-2ccpq\" (UID: \"f55151a2-6511-456d-b38a-be9f5a21c93c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.285643 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f55151a2-6511-456d-b38a-be9f5a21c93c-config-volume\") pod \"collect-profiles-29483355-2ccpq\" (UID: \"f55151a2-6511-456d-b38a-be9f5a21c93c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.387579 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f55151a2-6511-456d-b38a-be9f5a21c93c-config-volume\") pod \"collect-profiles-29483355-2ccpq\" (UID: \"f55151a2-6511-456d-b38a-be9f5a21c93c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.387676 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f55151a2-6511-456d-b38a-be9f5a21c93c-secret-volume\") pod \"collect-profiles-29483355-2ccpq\" (UID: \"f55151a2-6511-456d-b38a-be9f5a21c93c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.387696 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9q2q\" (UniqueName: 
\"kubernetes.io/projected/f55151a2-6511-456d-b38a-be9f5a21c93c-kube-api-access-r9q2q\") pod \"collect-profiles-29483355-2ccpq\" (UID: \"f55151a2-6511-456d-b38a-be9f5a21c93c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.389149 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f55151a2-6511-456d-b38a-be9f5a21c93c-config-volume\") pod \"collect-profiles-29483355-2ccpq\" (UID: \"f55151a2-6511-456d-b38a-be9f5a21c93c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.402758 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f55151a2-6511-456d-b38a-be9f5a21c93c-secret-volume\") pod \"collect-profiles-29483355-2ccpq\" (UID: \"f55151a2-6511-456d-b38a-be9f5a21c93c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.417528 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9q2q\" (UniqueName: \"kubernetes.io/projected/f55151a2-6511-456d-b38a-be9f5a21c93c-kube-api-access-r9q2q\") pod \"collect-profiles-29483355-2ccpq\" (UID: \"f55151a2-6511-456d-b38a-be9f5a21c93c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.523154 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" Jan 21 13:15:01 crc kubenswrapper[4881]: I0121 13:15:01.019143 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq"] Jan 21 13:15:01 crc kubenswrapper[4881]: I0121 13:15:01.119083 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" event={"ID":"f55151a2-6511-456d-b38a-be9f5a21c93c","Type":"ContainerStarted","Data":"78a03f2d314a3b74edc7187f8196a6192aa8a9d1e02cd6b8dc0699796d7cd89d"} Jan 21 13:15:02 crc kubenswrapper[4881]: I0121 13:15:02.137241 4881 generic.go:334] "Generic (PLEG): container finished" podID="f55151a2-6511-456d-b38a-be9f5a21c93c" containerID="e0dd23d233b9caa539382c8a1564b0d40bb269edd2ad3466941af737a67501dd" exitCode=0 Jan 21 13:15:02 crc kubenswrapper[4881]: I0121 13:15:02.137306 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" event={"ID":"f55151a2-6511-456d-b38a-be9f5a21c93c","Type":"ContainerDied","Data":"e0dd23d233b9caa539382c8a1564b0d40bb269edd2ad3466941af737a67501dd"} Jan 21 13:15:03 crc kubenswrapper[4881]: I0121 13:15:03.538431 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" Jan 21 13:15:03 crc kubenswrapper[4881]: I0121 13:15:03.674081 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f55151a2-6511-456d-b38a-be9f5a21c93c-config-volume\") pod \"f55151a2-6511-456d-b38a-be9f5a21c93c\" (UID: \"f55151a2-6511-456d-b38a-be9f5a21c93c\") " Jan 21 13:15:03 crc kubenswrapper[4881]: I0121 13:15:03.674508 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f55151a2-6511-456d-b38a-be9f5a21c93c-secret-volume\") pod \"f55151a2-6511-456d-b38a-be9f5a21c93c\" (UID: \"f55151a2-6511-456d-b38a-be9f5a21c93c\") " Jan 21 13:15:03 crc kubenswrapper[4881]: I0121 13:15:03.674610 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9q2q\" (UniqueName: \"kubernetes.io/projected/f55151a2-6511-456d-b38a-be9f5a21c93c-kube-api-access-r9q2q\") pod \"f55151a2-6511-456d-b38a-be9f5a21c93c\" (UID: \"f55151a2-6511-456d-b38a-be9f5a21c93c\") " Jan 21 13:15:03 crc kubenswrapper[4881]: I0121 13:15:03.674980 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f55151a2-6511-456d-b38a-be9f5a21c93c-config-volume" (OuterVolumeSpecName: "config-volume") pod "f55151a2-6511-456d-b38a-be9f5a21c93c" (UID: "f55151a2-6511-456d-b38a-be9f5a21c93c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:15:03 crc kubenswrapper[4881]: I0121 13:15:03.675212 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f55151a2-6511-456d-b38a-be9f5a21c93c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:03 crc kubenswrapper[4881]: I0121 13:15:03.687395 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f55151a2-6511-456d-b38a-be9f5a21c93c-kube-api-access-r9q2q" (OuterVolumeSpecName: "kube-api-access-r9q2q") pod "f55151a2-6511-456d-b38a-be9f5a21c93c" (UID: "f55151a2-6511-456d-b38a-be9f5a21c93c"). InnerVolumeSpecName "kube-api-access-r9q2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:15:03 crc kubenswrapper[4881]: I0121 13:15:03.688134 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f55151a2-6511-456d-b38a-be9f5a21c93c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f55151a2-6511-456d-b38a-be9f5a21c93c" (UID: "f55151a2-6511-456d-b38a-be9f5a21c93c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:15:03 crc kubenswrapper[4881]: I0121 13:15:03.777897 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f55151a2-6511-456d-b38a-be9f5a21c93c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:03 crc kubenswrapper[4881]: I0121 13:15:03.777947 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9q2q\" (UniqueName: \"kubernetes.io/projected/f55151a2-6511-456d-b38a-be9f5a21c93c-kube-api-access-r9q2q\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:04 crc kubenswrapper[4881]: I0121 13:15:04.167731 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" event={"ID":"f55151a2-6511-456d-b38a-be9f5a21c93c","Type":"ContainerDied","Data":"78a03f2d314a3b74edc7187f8196a6192aa8a9d1e02cd6b8dc0699796d7cd89d"} Jan 21 13:15:04 crc kubenswrapper[4881]: I0121 13:15:04.167767 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78a03f2d314a3b74edc7187f8196a6192aa8a9d1e02cd6b8dc0699796d7cd89d" Jan 21 13:15:04 crc kubenswrapper[4881]: I0121 13:15:04.167824 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" Jan 21 13:15:04 crc kubenswrapper[4881]: I0121 13:15:04.639739 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"] Jan 21 13:15:04 crc kubenswrapper[4881]: I0121 13:15:04.651426 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"] Jan 21 13:15:05 crc kubenswrapper[4881]: I0121 13:15:05.330479 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5368d7c4-a23a-46aa-8dea-1fde26f5df53" path="/var/lib/kubelet/pods/5368d7c4-a23a-46aa-8dea-1fde26f5df53/volumes" Jan 21 13:15:07 crc kubenswrapper[4881]: I0121 13:15:07.422661 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9zbwp"] Jan 21 13:15:07 crc kubenswrapper[4881]: E0121 13:15:07.425677 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f55151a2-6511-456d-b38a-be9f5a21c93c" containerName="collect-profiles" Jan 21 13:15:07 crc kubenswrapper[4881]: I0121 13:15:07.425707 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f55151a2-6511-456d-b38a-be9f5a21c93c" containerName="collect-profiles" Jan 21 13:15:07 crc kubenswrapper[4881]: I0121 13:15:07.426070 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f55151a2-6511-456d-b38a-be9f5a21c93c" containerName="collect-profiles" Jan 21 13:15:07 crc kubenswrapper[4881]: I0121 13:15:07.429136 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:07 crc kubenswrapper[4881]: I0121 13:15:07.442084 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9zbwp"] Jan 21 13:15:07 crc kubenswrapper[4881]: I0121 13:15:07.478338 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc894132-ff81-4462-808c-04b91aa131c5-catalog-content\") pod \"redhat-marketplace-9zbwp\" (UID: \"cc894132-ff81-4462-808c-04b91aa131c5\") " pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:07 crc kubenswrapper[4881]: I0121 13:15:07.478391 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc894132-ff81-4462-808c-04b91aa131c5-utilities\") pod \"redhat-marketplace-9zbwp\" (UID: \"cc894132-ff81-4462-808c-04b91aa131c5\") " pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:07 crc kubenswrapper[4881]: I0121 13:15:07.478497 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcpql\" (UniqueName: \"kubernetes.io/projected/cc894132-ff81-4462-808c-04b91aa131c5-kube-api-access-vcpql\") pod \"redhat-marketplace-9zbwp\" (UID: \"cc894132-ff81-4462-808c-04b91aa131c5\") " pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:07 crc kubenswrapper[4881]: I0121 13:15:07.580899 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcpql\" (UniqueName: \"kubernetes.io/projected/cc894132-ff81-4462-808c-04b91aa131c5-kube-api-access-vcpql\") pod \"redhat-marketplace-9zbwp\" (UID: \"cc894132-ff81-4462-808c-04b91aa131c5\") " pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:07 crc kubenswrapper[4881]: I0121 13:15:07.581330 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc894132-ff81-4462-808c-04b91aa131c5-catalog-content\") pod \"redhat-marketplace-9zbwp\" (UID: \"cc894132-ff81-4462-808c-04b91aa131c5\") " pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:07 crc kubenswrapper[4881]: I0121 13:15:07.581366 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc894132-ff81-4462-808c-04b91aa131c5-utilities\") pod \"redhat-marketplace-9zbwp\" (UID: \"cc894132-ff81-4462-808c-04b91aa131c5\") " pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:07 crc kubenswrapper[4881]: I0121 13:15:07.581842 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc894132-ff81-4462-808c-04b91aa131c5-catalog-content\") pod \"redhat-marketplace-9zbwp\" (UID: \"cc894132-ff81-4462-808c-04b91aa131c5\") " pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:07 crc kubenswrapper[4881]: I0121 13:15:07.581946 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc894132-ff81-4462-808c-04b91aa131c5-utilities\") pod \"redhat-marketplace-9zbwp\" (UID: \"cc894132-ff81-4462-808c-04b91aa131c5\") " pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:07 crc kubenswrapper[4881]: I0121 13:15:07.613975 4881 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vcpql\" (UniqueName: \"kubernetes.io/projected/cc894132-ff81-4462-808c-04b91aa131c5-kube-api-access-vcpql\") pod \"redhat-marketplace-9zbwp\" (UID: \"cc894132-ff81-4462-808c-04b91aa131c5\") " pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:07 crc kubenswrapper[4881]: I0121 13:15:07.756473 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:08 crc kubenswrapper[4881]: I0121 13:15:08.321922 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9zbwp"] Jan 21 13:15:09 crc kubenswrapper[4881]: I0121 13:15:09.225863 4881 generic.go:334] "Generic (PLEG): container finished" podID="cc894132-ff81-4462-808c-04b91aa131c5" containerID="1f0cf2aba23d64564f86d3e47e178b26c66b88713e2c1b4e63ada03ff3001e47" exitCode=0 Jan 21 13:15:09 crc kubenswrapper[4881]: I0121 13:15:09.225920 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zbwp" event={"ID":"cc894132-ff81-4462-808c-04b91aa131c5","Type":"ContainerDied","Data":"1f0cf2aba23d64564f86d3e47e178b26c66b88713e2c1b4e63ada03ff3001e47"} Jan 21 13:15:09 crc kubenswrapper[4881]: I0121 13:15:09.230700 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zbwp" event={"ID":"cc894132-ff81-4462-808c-04b91aa131c5","Type":"ContainerStarted","Data":"8051ab9dc5d632a0547e190564f726d71a1e7a469f81499a6307d3d35f95846e"} Jan 21 13:15:09 crc kubenswrapper[4881]: I0121 13:15:09.228925 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 13:15:10 crc kubenswrapper[4881]: I0121 13:15:10.310581 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:15:10 crc kubenswrapper[4881]: E0121 13:15:10.311966 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:15:11 crc kubenswrapper[4881]: I0121 13:15:11.261489 4881 generic.go:334] "Generic (PLEG): container finished" podID="cc894132-ff81-4462-808c-04b91aa131c5" containerID="7905ef1bd8eb4c2a74ecd66dee0f7a7d01738c48ab72e0bfb49efb8ba199940b" exitCode=0 Jan 21 13:15:11 crc kubenswrapper[4881]: I0121 13:15:11.261561 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zbwp" event={"ID":"cc894132-ff81-4462-808c-04b91aa131c5","Type":"ContainerDied","Data":"7905ef1bd8eb4c2a74ecd66dee0f7a7d01738c48ab72e0bfb49efb8ba199940b"} Jan 21 13:15:12 crc kubenswrapper[4881]: I0121 13:15:12.272402 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zbwp" event={"ID":"cc894132-ff81-4462-808c-04b91aa131c5","Type":"ContainerStarted","Data":"b2480cdd412677da34ca1262943186b4f02a412993e268c2cc5a3c46d5441e61"} Jan 21 13:15:12 crc kubenswrapper[4881]: I0121 13:15:12.321137 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9zbwp" podStartSLOduration=2.715422987 podStartE2EDuration="5.321106479s" 
podCreationTimestamp="2026-01-21 13:15:07 +0000 UTC" firstStartedPulling="2026-01-21 13:15:09.228582772 +0000 UTC m=+8296.488539261" lastFinishedPulling="2026-01-21 13:15:11.834266244 +0000 UTC m=+8299.094222753" observedRunningTime="2026-01-21 13:15:12.301390098 +0000 UTC m=+8299.561346577" watchObservedRunningTime="2026-01-21 13:15:12.321106479 +0000 UTC m=+8299.581062948" Jan 21 13:15:17 crc kubenswrapper[4881]: I0121 13:15:17.757021 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:17 crc kubenswrapper[4881]: I0121 13:15:17.759892 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:17 crc kubenswrapper[4881]: I0121 13:15:17.813297 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:18 crc kubenswrapper[4881]: I0121 13:15:18.402717 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:19 crc kubenswrapper[4881]: I0121 13:15:19.590833 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9zbwp"] Jan 21 13:15:21 crc kubenswrapper[4881]: I0121 13:15:21.421860 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9zbwp" podUID="cc894132-ff81-4462-808c-04b91aa131c5" containerName="registry-server" containerID="cri-o://b2480cdd412677da34ca1262943186b4f02a412993e268c2cc5a3c46d5441e61" gracePeriod=2 Jan 21 13:15:22 crc kubenswrapper[4881]: I0121 13:15:22.311217 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:15:22 crc kubenswrapper[4881]: E0121 13:15:22.312166 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:15:22 crc kubenswrapper[4881]: I0121 13:15:22.434055 4881 generic.go:334] "Generic (PLEG): container finished" podID="cc894132-ff81-4462-808c-04b91aa131c5" containerID="b2480cdd412677da34ca1262943186b4f02a412993e268c2cc5a3c46d5441e61" exitCode=0 Jan 21 13:15:22 crc kubenswrapper[4881]: I0121 13:15:22.434109 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zbwp" event={"ID":"cc894132-ff81-4462-808c-04b91aa131c5","Type":"ContainerDied","Data":"b2480cdd412677da34ca1262943186b4f02a412993e268c2cc5a3c46d5441e61"} Jan 21 13:15:22 crc kubenswrapper[4881]: I0121 13:15:22.434175 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zbwp" event={"ID":"cc894132-ff81-4462-808c-04b91aa131c5","Type":"ContainerDied","Data":"8051ab9dc5d632a0547e190564f726d71a1e7a469f81499a6307d3d35f95846e"} Jan 21 13:15:22 crc kubenswrapper[4881]: I0121 13:15:22.434195 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8051ab9dc5d632a0547e190564f726d71a1e7a469f81499a6307d3d35f95846e" Jan 21 13:15:22 crc kubenswrapper[4881]: I0121 13:15:22.493603 4881 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:22 crc kubenswrapper[4881]: I0121 13:15:22.683931 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc894132-ff81-4462-808c-04b91aa131c5-utilities\") pod \"cc894132-ff81-4462-808c-04b91aa131c5\" (UID: \"cc894132-ff81-4462-808c-04b91aa131c5\") " Jan 21 13:15:22 crc kubenswrapper[4881]: I0121 13:15:22.684159 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc894132-ff81-4462-808c-04b91aa131c5-catalog-content\") pod \"cc894132-ff81-4462-808c-04b91aa131c5\" (UID: \"cc894132-ff81-4462-808c-04b91aa131c5\") " Jan 21 13:15:22 crc kubenswrapper[4881]: I0121 13:15:22.684244 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcpql\" (UniqueName: \"kubernetes.io/projected/cc894132-ff81-4462-808c-04b91aa131c5-kube-api-access-vcpql\") pod \"cc894132-ff81-4462-808c-04b91aa131c5\" (UID: \"cc894132-ff81-4462-808c-04b91aa131c5\") " Jan 21 13:15:22 crc kubenswrapper[4881]: I0121 13:15:22.692134 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc894132-ff81-4462-808c-04b91aa131c5-utilities" (OuterVolumeSpecName: "utilities") pod "cc894132-ff81-4462-808c-04b91aa131c5" (UID: "cc894132-ff81-4462-808c-04b91aa131c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:15:22 crc kubenswrapper[4881]: I0121 13:15:22.702371 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc894132-ff81-4462-808c-04b91aa131c5-kube-api-access-vcpql" (OuterVolumeSpecName: "kube-api-access-vcpql") pod "cc894132-ff81-4462-808c-04b91aa131c5" (UID: "cc894132-ff81-4462-808c-04b91aa131c5"). InnerVolumeSpecName "kube-api-access-vcpql". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:15:22 crc kubenswrapper[4881]: I0121 13:15:22.714399 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc894132-ff81-4462-808c-04b91aa131c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc894132-ff81-4462-808c-04b91aa131c5" (UID: "cc894132-ff81-4462-808c-04b91aa131c5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:15:22 crc kubenswrapper[4881]: I0121 13:15:22.787251 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc894132-ff81-4462-808c-04b91aa131c5-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:22 crc kubenswrapper[4881]: I0121 13:15:22.788338 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc894132-ff81-4462-808c-04b91aa131c5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:22 crc kubenswrapper[4881]: I0121 13:15:22.788404 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcpql\" (UniqueName: \"kubernetes.io/projected/cc894132-ff81-4462-808c-04b91aa131c5-kube-api-access-vcpql\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:23 crc kubenswrapper[4881]: I0121 13:15:23.444071 4881 util.go:48] "No ready sandbox for pod can be found. 
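The pod_startup_latency_tracker entry for redhat-marketplace-9zbwp above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (13:15:12.321106479 - 13:15:07 = 5.321106479s), and podStartSLOduration subtracts the image-pull window measured on the monotonic clock (m=+8299.094222753 - m=+8296.488539261 = 2.605683492s), giving 2.715422987s, exactly as logged. A sketch reproducing the numbers from the logged values:

    package main

    import "fmt"

    func main() {
        // Values copied from the tracker entry above.
        created := 7.0                        // 13:15:07 podCreationTimestamp (seconds into the minute)
        watchObservedRunning := 12.321106479  // 13:15:12.321106479 watchObservedRunningTime
        firstStartedPulling := 8296.488539261 // monotonic m=+ offset
        lastFinishedPulling := 8299.094222753 // monotonic m=+ offset

        e2e := watchObservedRunning - created
        slo := e2e - (lastFinishedPulling - firstStartedPulling)
        fmt.Printf("podStartE2EDuration=%.9fs\n", e2e) // 5.321106479s
        fmt.Printf("podStartSLOduration=%.9fs\n", slo) // 2.715422987s
    }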
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:23 crc kubenswrapper[4881]: I0121 13:15:23.470521 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9zbwp"] Jan 21 13:15:23 crc kubenswrapper[4881]: I0121 13:15:23.481073 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9zbwp"] Jan 21 13:15:25 crc kubenswrapper[4881]: I0121 13:15:25.327601 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc894132-ff81-4462-808c-04b91aa131c5" path="/var/lib/kubelet/pods/cc894132-ff81-4462-808c-04b91aa131c5/volumes" Jan 21 13:15:34 crc kubenswrapper[4881]: I0121 13:15:34.310935 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:15:34 crc kubenswrapper[4881]: E0121 13:15:34.311747 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:15:35 crc kubenswrapper[4881]: I0121 13:15:35.955444 4881 scope.go:117] "RemoveContainer" containerID="b60782b6ad5aeb71531d28ab48543fd988c6726bf0975c069d2238cd6237f3ab" Jan 21 13:15:48 crc kubenswrapper[4881]: I0121 13:15:48.311845 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:15:48 crc kubenswrapper[4881]: E0121 13:15:48.312729 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:16:00 crc kubenswrapper[4881]: I0121 13:16:00.311291 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:16:00 crc kubenswrapper[4881]: E0121 13:16:00.312112 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:16:15 crc kubenswrapper[4881]: I0121 13:16:15.311603 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:16:15 crc kubenswrapper[4881]: E0121 13:16:15.312510 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:16:25 
crc kubenswrapper[4881]: I0121 13:16:25.408924 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9zw6q"] Jan 21 13:16:25 crc kubenswrapper[4881]: E0121 13:16:25.411673 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc894132-ff81-4462-808c-04b91aa131c5" containerName="registry-server" Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.411701 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc894132-ff81-4462-808c-04b91aa131c5" containerName="registry-server" Jan 21 13:16:25 crc kubenswrapper[4881]: E0121 13:16:25.411837 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc894132-ff81-4462-808c-04b91aa131c5" containerName="extract-utilities" Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.411851 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc894132-ff81-4462-808c-04b91aa131c5" containerName="extract-utilities" Jan 21 13:16:25 crc kubenswrapper[4881]: E0121 13:16:25.411906 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc894132-ff81-4462-808c-04b91aa131c5" containerName="extract-content" Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.411924 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc894132-ff81-4462-808c-04b91aa131c5" containerName="extract-content" Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.412428 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc894132-ff81-4462-808c-04b91aa131c5" containerName="registry-server" Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.414818 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9zw6q" Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.418604 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66383caa-595c-4dad-b9a9-a2878ef04277-catalog-content\") pod \"certified-operators-9zw6q\" (UID: \"66383caa-595c-4dad-b9a9-a2878ef04277\") " pod="openshift-marketplace/certified-operators-9zw6q" Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.418670 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cnwb\" (UniqueName: \"kubernetes.io/projected/66383caa-595c-4dad-b9a9-a2878ef04277-kube-api-access-5cnwb\") pod \"certified-operators-9zw6q\" (UID: \"66383caa-595c-4dad-b9a9-a2878ef04277\") " pod="openshift-marketplace/certified-operators-9zw6q" Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.418723 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66383caa-595c-4dad-b9a9-a2878ef04277-utilities\") pod \"certified-operators-9zw6q\" (UID: \"66383caa-595c-4dad-b9a9-a2878ef04277\") " pod="openshift-marketplace/certified-operators-9zw6q" Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.441235 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9zw6q"] Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.521871 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66383caa-595c-4dad-b9a9-a2878ef04277-catalog-content\") pod \"certified-operators-9zw6q\" (UID: \"66383caa-595c-4dad-b9a9-a2878ef04277\") " 
pod="openshift-marketplace/certified-operators-9zw6q" Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.521927 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cnwb\" (UniqueName: \"kubernetes.io/projected/66383caa-595c-4dad-b9a9-a2878ef04277-kube-api-access-5cnwb\") pod \"certified-operators-9zw6q\" (UID: \"66383caa-595c-4dad-b9a9-a2878ef04277\") " pod="openshift-marketplace/certified-operators-9zw6q" Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.521965 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66383caa-595c-4dad-b9a9-a2878ef04277-utilities\") pod \"certified-operators-9zw6q\" (UID: \"66383caa-595c-4dad-b9a9-a2878ef04277\") " pod="openshift-marketplace/certified-operators-9zw6q" Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.522454 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66383caa-595c-4dad-b9a9-a2878ef04277-catalog-content\") pod \"certified-operators-9zw6q\" (UID: \"66383caa-595c-4dad-b9a9-a2878ef04277\") " pod="openshift-marketplace/certified-operators-9zw6q" Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.522728 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66383caa-595c-4dad-b9a9-a2878ef04277-utilities\") pod \"certified-operators-9zw6q\" (UID: \"66383caa-595c-4dad-b9a9-a2878ef04277\") " pod="openshift-marketplace/certified-operators-9zw6q" Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.557434 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cnwb\" (UniqueName: \"kubernetes.io/projected/66383caa-595c-4dad-b9a9-a2878ef04277-kube-api-access-5cnwb\") pod \"certified-operators-9zw6q\" (UID: \"66383caa-595c-4dad-b9a9-a2878ef04277\") " pod="openshift-marketplace/certified-operators-9zw6q" Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.749406 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9zw6q" Jan 21 13:16:26 crc kubenswrapper[4881]: I0121 13:16:26.308136 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9zw6q"] Jan 21 13:16:26 crc kubenswrapper[4881]: I0121 13:16:26.313048 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:16:26 crc kubenswrapper[4881]: E0121 13:16:26.313400 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:16:27 crc kubenswrapper[4881]: I0121 13:16:27.237513 4881 generic.go:334] "Generic (PLEG): container finished" podID="66383caa-595c-4dad-b9a9-a2878ef04277" containerID="888c145ca396a50869d27c201487ff33c86b2ddf4c4044b3820855e98578a9e7" exitCode=0 Jan 21 13:16:27 crc kubenswrapper[4881]: I0121 13:16:27.238055 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9zw6q" event={"ID":"66383caa-595c-4dad-b9a9-a2878ef04277","Type":"ContainerDied","Data":"888c145ca396a50869d27c201487ff33c86b2ddf4c4044b3820855e98578a9e7"} Jan 21 13:16:27 crc kubenswrapper[4881]: I0121 13:16:27.238632 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9zw6q" event={"ID":"66383caa-595c-4dad-b9a9-a2878ef04277","Type":"ContainerStarted","Data":"fa71987c36f90575b883cf28f8ac5bdfa3d896fa89f6a90865690b81487ced82"} Jan 21 13:16:30 crc kubenswrapper[4881]: I0121 13:16:30.295380 4881 generic.go:334] "Generic (PLEG): container finished" podID="66383caa-595c-4dad-b9a9-a2878ef04277" containerID="bff63c9802c0398fc00e1986f634cb55138bcb056e7756d4c61b3750ac66677f" exitCode=0 Jan 21 13:16:30 crc kubenswrapper[4881]: I0121 13:16:30.295945 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9zw6q" event={"ID":"66383caa-595c-4dad-b9a9-a2878ef04277","Type":"ContainerDied","Data":"bff63c9802c0398fc00e1986f634cb55138bcb056e7756d4c61b3750ac66677f"} Jan 21 13:16:33 crc kubenswrapper[4881]: I0121 13:16:33.354224 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9zw6q" event={"ID":"66383caa-595c-4dad-b9a9-a2878ef04277","Type":"ContainerStarted","Data":"d74a903ddc5fab46287d1a5319c135f5bf5966234a33aa48043f7dab675fc495"} Jan 21 13:16:33 crc kubenswrapper[4881]: I0121 13:16:33.383463 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9zw6q" podStartSLOduration=3.335513833 podStartE2EDuration="8.383429028s" podCreationTimestamp="2026-01-21 13:16:25 +0000 UTC" firstStartedPulling="2026-01-21 13:16:27.244560593 +0000 UTC m=+8374.504517062" lastFinishedPulling="2026-01-21 13:16:32.292475788 +0000 UTC m=+8379.552432257" observedRunningTime="2026-01-21 13:16:33.37237841 +0000 UTC m=+8380.632334889" watchObservedRunningTime="2026-01-21 13:16:33.383429028 +0000 UTC m=+8380.643385527" Jan 21 13:16:35 crc kubenswrapper[4881]: I0121 13:16:35.750183 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9zw6q" Jan 21 
13:16:35 crc kubenswrapper[4881]: I0121 13:16:35.750633 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9zw6q" Jan 21 13:16:35 crc kubenswrapper[4881]: I0121 13:16:35.831501 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9zw6q" Jan 21 13:16:37 crc kubenswrapper[4881]: I0121 13:16:37.311841 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:16:37 crc kubenswrapper[4881]: E0121 13:16:37.312743 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:16:46 crc kubenswrapper[4881]: I0121 13:16:46.073425 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9zw6q" Jan 21 13:16:46 crc kubenswrapper[4881]: I0121 13:16:46.196167 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9zw6q"] Jan 21 13:16:46 crc kubenswrapper[4881]: I0121 13:16:46.730768 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9zw6q" podUID="66383caa-595c-4dad-b9a9-a2878ef04277" containerName="registry-server" containerID="cri-o://d74a903ddc5fab46287d1a5319c135f5bf5966234a33aa48043f7dab675fc495" gracePeriod=2 Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.291686 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9zw6q" Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.469710 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cnwb\" (UniqueName: \"kubernetes.io/projected/66383caa-595c-4dad-b9a9-a2878ef04277-kube-api-access-5cnwb\") pod \"66383caa-595c-4dad-b9a9-a2878ef04277\" (UID: \"66383caa-595c-4dad-b9a9-a2878ef04277\") " Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.469766 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66383caa-595c-4dad-b9a9-a2878ef04277-utilities\") pod \"66383caa-595c-4dad-b9a9-a2878ef04277\" (UID: \"66383caa-595c-4dad-b9a9-a2878ef04277\") " Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.469988 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66383caa-595c-4dad-b9a9-a2878ef04277-catalog-content\") pod \"66383caa-595c-4dad-b9a9-a2878ef04277\" (UID: \"66383caa-595c-4dad-b9a9-a2878ef04277\") " Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.471905 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66383caa-595c-4dad-b9a9-a2878ef04277-utilities" (OuterVolumeSpecName: "utilities") pod "66383caa-595c-4dad-b9a9-a2878ef04277" (UID: "66383caa-595c-4dad-b9a9-a2878ef04277"). InnerVolumeSpecName "utilities". 
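"Killing container with a grace period" is the two-phase CRI stop: deliver SIGTERM, wait up to the grace period (2s for these catalog pods; 600s for the machine-config-daemon at 13:20:59 below), then SIGKILL. A process-level sketch of the same pattern:

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    // stopWithGrace sends SIGTERM, waits up to grace, then SIGKILLs.
    func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
        _ = cmd.Process.Signal(syscall.SIGTERM)
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()
        select {
        case <-done:
            fmt.Println("exited within grace period")
        case <-time.After(grace):
            _ = cmd.Process.Kill() // SIGKILL, mirroring the runtime's hard stop
            <-done
            fmt.Println("killed after grace period expired")
        }
    }

    func main() {
        cmd := exec.Command("sleep", "60")
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        stopWithGrace(cmd, 2*time.Second) // gracePeriod=2, as for registry-server
    }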
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.496859 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66383caa-595c-4dad-b9a9-a2878ef04277-kube-api-access-5cnwb" (OuterVolumeSpecName: "kube-api-access-5cnwb") pod "66383caa-595c-4dad-b9a9-a2878ef04277" (UID: "66383caa-595c-4dad-b9a9-a2878ef04277"). InnerVolumeSpecName "kube-api-access-5cnwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.519616 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66383caa-595c-4dad-b9a9-a2878ef04277-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66383caa-595c-4dad-b9a9-a2878ef04277" (UID: "66383caa-595c-4dad-b9a9-a2878ef04277"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.572233 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66383caa-595c-4dad-b9a9-a2878ef04277-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.572263 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cnwb\" (UniqueName: \"kubernetes.io/projected/66383caa-595c-4dad-b9a9-a2878ef04277-kube-api-access-5cnwb\") on node \"crc\" DevicePath \"\"" Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.572276 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66383caa-595c-4dad-b9a9-a2878ef04277-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.746497 4881 generic.go:334] "Generic (PLEG): container finished" podID="66383caa-595c-4dad-b9a9-a2878ef04277" containerID="d74a903ddc5fab46287d1a5319c135f5bf5966234a33aa48043f7dab675fc495" exitCode=0 Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.746558 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9zw6q" event={"ID":"66383caa-595c-4dad-b9a9-a2878ef04277","Type":"ContainerDied","Data":"d74a903ddc5fab46287d1a5319c135f5bf5966234a33aa48043f7dab675fc495"} Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.746594 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9zw6q" event={"ID":"66383caa-595c-4dad-b9a9-a2878ef04277","Type":"ContainerDied","Data":"fa71987c36f90575b883cf28f8ac5bdfa3d896fa89f6a90865690b81487ced82"} Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.746616 4881 scope.go:117] "RemoveContainer" containerID="d74a903ddc5fab46287d1a5319c135f5bf5966234a33aa48043f7dab675fc495" Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.746813 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9zw6q" Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.793373 4881 scope.go:117] "RemoveContainer" containerID="bff63c9802c0398fc00e1986f634cb55138bcb056e7756d4c61b3750ac66677f" Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.815043 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9zw6q"] Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.817232 4881 scope.go:117] "RemoveContainer" containerID="888c145ca396a50869d27c201487ff33c86b2ddf4c4044b3820855e98578a9e7" Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.822892 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9zw6q"] Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.883404 4881 scope.go:117] "RemoveContainer" containerID="d74a903ddc5fab46287d1a5319c135f5bf5966234a33aa48043f7dab675fc495" Jan 21 13:16:47 crc kubenswrapper[4881]: E0121 13:16:47.886910 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d74a903ddc5fab46287d1a5319c135f5bf5966234a33aa48043f7dab675fc495\": container with ID starting with d74a903ddc5fab46287d1a5319c135f5bf5966234a33aa48043f7dab675fc495 not found: ID does not exist" containerID="d74a903ddc5fab46287d1a5319c135f5bf5966234a33aa48043f7dab675fc495" Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.887117 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d74a903ddc5fab46287d1a5319c135f5bf5966234a33aa48043f7dab675fc495"} err="failed to get container status \"d74a903ddc5fab46287d1a5319c135f5bf5966234a33aa48043f7dab675fc495\": rpc error: code = NotFound desc = could not find container \"d74a903ddc5fab46287d1a5319c135f5bf5966234a33aa48043f7dab675fc495\": container with ID starting with d74a903ddc5fab46287d1a5319c135f5bf5966234a33aa48043f7dab675fc495 not found: ID does not exist" Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.887219 4881 scope.go:117] "RemoveContainer" containerID="bff63c9802c0398fc00e1986f634cb55138bcb056e7756d4c61b3750ac66677f" Jan 21 13:16:47 crc kubenswrapper[4881]: E0121 13:16:47.888165 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bff63c9802c0398fc00e1986f634cb55138bcb056e7756d4c61b3750ac66677f\": container with ID starting with bff63c9802c0398fc00e1986f634cb55138bcb056e7756d4c61b3750ac66677f not found: ID does not exist" containerID="bff63c9802c0398fc00e1986f634cb55138bcb056e7756d4c61b3750ac66677f" Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.888193 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bff63c9802c0398fc00e1986f634cb55138bcb056e7756d4c61b3750ac66677f"} err="failed to get container status \"bff63c9802c0398fc00e1986f634cb55138bcb056e7756d4c61b3750ac66677f\": rpc error: code = NotFound desc = could not find container \"bff63c9802c0398fc00e1986f634cb55138bcb056e7756d4c61b3750ac66677f\": container with ID starting with bff63c9802c0398fc00e1986f634cb55138bcb056e7756d4c61b3750ac66677f not found: ID does not exist" Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.888212 4881 scope.go:117] "RemoveContainer" containerID="888c145ca396a50869d27c201487ff33c86b2ddf4c4044b3820855e98578a9e7" Jan 21 13:16:47 crc kubenswrapper[4881]: E0121 13:16:47.888505 4881 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"888c145ca396a50869d27c201487ff33c86b2ddf4c4044b3820855e98578a9e7\": container with ID starting with 888c145ca396a50869d27c201487ff33c86b2ddf4c4044b3820855e98578a9e7 not found: ID does not exist" containerID="888c145ca396a50869d27c201487ff33c86b2ddf4c4044b3820855e98578a9e7" Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.888549 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"888c145ca396a50869d27c201487ff33c86b2ddf4c4044b3820855e98578a9e7"} err="failed to get container status \"888c145ca396a50869d27c201487ff33c86b2ddf4c4044b3820855e98578a9e7\": rpc error: code = NotFound desc = could not find container \"888c145ca396a50869d27c201487ff33c86b2ddf4c4044b3820855e98578a9e7\": container with ID starting with 888c145ca396a50869d27c201487ff33c86b2ddf4c4044b3820855e98578a9e7 not found: ID does not exist" Jan 21 13:16:49 crc kubenswrapper[4881]: I0121 13:16:49.334868 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66383caa-595c-4dad-b9a9-a2878ef04277" path="/var/lib/kubelet/pods/66383caa-595c-4dad-b9a9-a2878ef04277/volumes" Jan 21 13:16:50 crc kubenswrapper[4881]: I0121 13:16:50.310900 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:16:50 crc kubenswrapper[4881]: E0121 13:16:50.311281 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:17:01 crc kubenswrapper[4881]: I0121 13:17:01.319345 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:17:01 crc kubenswrapper[4881]: E0121 13:17:01.320391 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:17:16 crc kubenswrapper[4881]: I0121 13:17:16.311455 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:17:16 crc kubenswrapper[4881]: E0121 13:17:16.312278 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:17:31 crc kubenswrapper[4881]: I0121 13:17:31.312054 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:17:32 crc kubenswrapper[4881]: I0121 13:17:32.303258 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"3ae329a055e11a6e18e47ddb94b164ca6b139ccd6dac8d7c44083794de49a8f4"} Jan 21 13:19:59 crc kubenswrapper[4881]: I0121 13:19:59.851622 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:19:59 crc kubenswrapper[4881]: I0121 13:19:59.853500 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:20:29 crc kubenswrapper[4881]: I0121 13:20:29.851074 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:20:29 crc kubenswrapper[4881]: I0121 13:20:29.852842 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.446114 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w2b5n"] Jan 21 13:20:52 crc kubenswrapper[4881]: E0121 13:20:52.447445 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66383caa-595c-4dad-b9a9-a2878ef04277" containerName="registry-server" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.447479 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="66383caa-595c-4dad-b9a9-a2878ef04277" containerName="registry-server" Jan 21 13:20:52 crc kubenswrapper[4881]: E0121 13:20:52.447512 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66383caa-595c-4dad-b9a9-a2878ef04277" containerName="extract-content" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.447520 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="66383caa-595c-4dad-b9a9-a2878ef04277" containerName="extract-content" Jan 21 13:20:52 crc kubenswrapper[4881]: E0121 13:20:52.447541 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66383caa-595c-4dad-b9a9-a2878ef04277" containerName="extract-utilities" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.447550 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="66383caa-595c-4dad-b9a9-a2878ef04277" containerName="extract-utilities" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.447833 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="66383caa-595c-4dad-b9a9-a2878ef04277" containerName="registry-server" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.450022 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w2b5n" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.476287 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w2b5n"] Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.628448 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db19ebef-05c6-4b18-9143-641c362c472a-utilities\") pod \"redhat-operators-w2b5n\" (UID: \"db19ebef-05c6-4b18-9143-641c362c472a\") " pod="openshift-marketplace/redhat-operators-w2b5n" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.628526 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5mcz\" (UniqueName: \"kubernetes.io/projected/db19ebef-05c6-4b18-9143-641c362c472a-kube-api-access-k5mcz\") pod \"redhat-operators-w2b5n\" (UID: \"db19ebef-05c6-4b18-9143-641c362c472a\") " pod="openshift-marketplace/redhat-operators-w2b5n" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.628747 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db19ebef-05c6-4b18-9143-641c362c472a-catalog-content\") pod \"redhat-operators-w2b5n\" (UID: \"db19ebef-05c6-4b18-9143-641c362c472a\") " pod="openshift-marketplace/redhat-operators-w2b5n" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.730931 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db19ebef-05c6-4b18-9143-641c362c472a-catalog-content\") pod \"redhat-operators-w2b5n\" (UID: \"db19ebef-05c6-4b18-9143-641c362c472a\") " pod="openshift-marketplace/redhat-operators-w2b5n" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.731200 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db19ebef-05c6-4b18-9143-641c362c472a-utilities\") pod \"redhat-operators-w2b5n\" (UID: \"db19ebef-05c6-4b18-9143-641c362c472a\") " pod="openshift-marketplace/redhat-operators-w2b5n" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.731236 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5mcz\" (UniqueName: \"kubernetes.io/projected/db19ebef-05c6-4b18-9143-641c362c472a-kube-api-access-k5mcz\") pod \"redhat-operators-w2b5n\" (UID: \"db19ebef-05c6-4b18-9143-641c362c472a\") " pod="openshift-marketplace/redhat-operators-w2b5n" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.733001 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db19ebef-05c6-4b18-9143-641c362c472a-utilities\") pod \"redhat-operators-w2b5n\" (UID: \"db19ebef-05c6-4b18-9143-641c362c472a\") " pod="openshift-marketplace/redhat-operators-w2b5n" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.733437 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db19ebef-05c6-4b18-9143-641c362c472a-catalog-content\") pod \"redhat-operators-w2b5n\" (UID: \"db19ebef-05c6-4b18-9143-641c362c472a\") " pod="openshift-marketplace/redhat-operators-w2b5n" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.762540 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-k5mcz\" (UniqueName: \"kubernetes.io/projected/db19ebef-05c6-4b18-9143-641c362c472a-kube-api-access-k5mcz\") pod \"redhat-operators-w2b5n\" (UID: \"db19ebef-05c6-4b18-9143-641c362c472a\") " pod="openshift-marketplace/redhat-operators-w2b5n" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.779924 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w2b5n" Jan 21 13:20:53 crc kubenswrapper[4881]: I0121 13:20:53.750985 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w2b5n"] Jan 21 13:20:54 crc kubenswrapper[4881]: I0121 13:20:54.749460 4881 generic.go:334] "Generic (PLEG): container finished" podID="db19ebef-05c6-4b18-9143-641c362c472a" containerID="6dbe2048b7bc3a11cae2e8d7d9c920a0149d21d882e0a5f95950ab0f8e3a03a8" exitCode=0 Jan 21 13:20:54 crc kubenswrapper[4881]: I0121 13:20:54.749845 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2b5n" event={"ID":"db19ebef-05c6-4b18-9143-641c362c472a","Type":"ContainerDied","Data":"6dbe2048b7bc3a11cae2e8d7d9c920a0149d21d882e0a5f95950ab0f8e3a03a8"} Jan 21 13:20:54 crc kubenswrapper[4881]: I0121 13:20:54.749908 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2b5n" event={"ID":"db19ebef-05c6-4b18-9143-641c362c472a","Type":"ContainerStarted","Data":"ddf4df98d45221ed009798fb432f66248e0003d2feeb478daa19954df3572ec4"} Jan 21 13:20:54 crc kubenswrapper[4881]: I0121 13:20:54.752657 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 13:20:55 crc kubenswrapper[4881]: I0121 13:20:55.760936 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2b5n" event={"ID":"db19ebef-05c6-4b18-9143-641c362c472a","Type":"ContainerStarted","Data":"29cdfe4f4a58d6bfbac2fda27216d3e6f54bfa02a8c3395266542bab5b1d563e"} Jan 21 13:20:59 crc kubenswrapper[4881]: I0121 13:20:59.811250 4881 generic.go:334] "Generic (PLEG): container finished" podID="db19ebef-05c6-4b18-9143-641c362c472a" containerID="29cdfe4f4a58d6bfbac2fda27216d3e6f54bfa02a8c3395266542bab5b1d563e" exitCode=0 Jan 21 13:20:59 crc kubenswrapper[4881]: I0121 13:20:59.811345 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2b5n" event={"ID":"db19ebef-05c6-4b18-9143-641c362c472a","Type":"ContainerDied","Data":"29cdfe4f4a58d6bfbac2fda27216d3e6f54bfa02a8c3395266542bab5b1d563e"} Jan 21 13:20:59 crc kubenswrapper[4881]: I0121 13:20:59.851159 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:20:59 crc kubenswrapper[4881]: I0121 13:20:59.851252 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:20:59 crc kubenswrapper[4881]: I0121 13:20:59.851300 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 13:20:59 crc 
kubenswrapper[4881]: I0121 13:20:59.852179 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ae329a055e11a6e18e47ddb94b164ca6b139ccd6dac8d7c44083794de49a8f4"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 13:20:59 crc kubenswrapper[4881]: I0121 13:20:59.852257 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://3ae329a055e11a6e18e47ddb94b164ca6b139ccd6dac8d7c44083794de49a8f4" gracePeriod=600 Jan 21 13:21:00 crc kubenswrapper[4881]: I0121 13:21:00.827831 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="3ae329a055e11a6e18e47ddb94b164ca6b139ccd6dac8d7c44083794de49a8f4" exitCode=0 Jan 21 13:21:00 crc kubenswrapper[4881]: I0121 13:21:00.827894 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"3ae329a055e11a6e18e47ddb94b164ca6b139ccd6dac8d7c44083794de49a8f4"} Jan 21 13:21:00 crc kubenswrapper[4881]: I0121 13:21:00.828341 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:21:01 crc kubenswrapper[4881]: I0121 13:21:01.844587 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4"} Jan 21 13:21:03 crc kubenswrapper[4881]: I0121 13:21:03.866116 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2b5n" event={"ID":"db19ebef-05c6-4b18-9143-641c362c472a","Type":"ContainerStarted","Data":"18fba6b402b1f7f76f16f037f9cbdc79db022c2952c74431b3a0f07a73053da5"} Jan 21 13:21:03 crc kubenswrapper[4881]: I0121 13:21:03.911446 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w2b5n" podStartSLOduration=3.526557338 podStartE2EDuration="11.911400621s" podCreationTimestamp="2026-01-21 13:20:52 +0000 UTC" firstStartedPulling="2026-01-21 13:20:54.752240727 +0000 UTC m=+8642.012197196" lastFinishedPulling="2026-01-21 13:21:03.13708401 +0000 UTC m=+8650.397040479" observedRunningTime="2026-01-21 13:21:03.894176328 +0000 UTC m=+8651.154132797" watchObservedRunningTime="2026-01-21 13:21:03.911400621 +0000 UTC m=+8651.171357090" Jan 21 13:21:12 crc kubenswrapper[4881]: I0121 13:21:12.785317 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w2b5n" Jan 21 13:21:12 crc kubenswrapper[4881]: I0121 13:21:12.786012 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w2b5n" Jan 21 13:21:12 crc kubenswrapper[4881]: I0121 13:21:12.846041 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w2b5n" Jan 21 13:21:13 crc kubenswrapper[4881]: I0121 13:21:13.018074 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-operators-w2b5n" Jan 21 13:21:13 crc kubenswrapper[4881]: I0121 13:21:13.091233 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w2b5n"] Jan 21 13:21:14 crc kubenswrapper[4881]: I0121 13:21:14.987132 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w2b5n" podUID="db19ebef-05c6-4b18-9143-641c362c472a" containerName="registry-server" containerID="cri-o://18fba6b402b1f7f76f16f037f9cbdc79db022c2952c74431b3a0f07a73053da5" gracePeriod=2 Jan 21 13:21:15 crc kubenswrapper[4881]: I0121 13:21:15.504363 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w2b5n" Jan 21 13:21:15 crc kubenswrapper[4881]: I0121 13:21:15.683302 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db19ebef-05c6-4b18-9143-641c362c472a-catalog-content\") pod \"db19ebef-05c6-4b18-9143-641c362c472a\" (UID: \"db19ebef-05c6-4b18-9143-641c362c472a\") " Jan 21 13:21:15 crc kubenswrapper[4881]: I0121 13:21:15.683801 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db19ebef-05c6-4b18-9143-641c362c472a-utilities\") pod \"db19ebef-05c6-4b18-9143-641c362c472a\" (UID: \"db19ebef-05c6-4b18-9143-641c362c472a\") " Jan 21 13:21:15 crc kubenswrapper[4881]: I0121 13:21:15.683875 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5mcz\" (UniqueName: \"kubernetes.io/projected/db19ebef-05c6-4b18-9143-641c362c472a-kube-api-access-k5mcz\") pod \"db19ebef-05c6-4b18-9143-641c362c472a\" (UID: \"db19ebef-05c6-4b18-9143-641c362c472a\") " Jan 21 13:21:15 crc kubenswrapper[4881]: I0121 13:21:15.685158 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db19ebef-05c6-4b18-9143-641c362c472a-utilities" (OuterVolumeSpecName: "utilities") pod "db19ebef-05c6-4b18-9143-641c362c472a" (UID: "db19ebef-05c6-4b18-9143-641c362c472a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:21:15 crc kubenswrapper[4881]: I0121 13:21:15.692147 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db19ebef-05c6-4b18-9143-641c362c472a-kube-api-access-k5mcz" (OuterVolumeSpecName: "kube-api-access-k5mcz") pod "db19ebef-05c6-4b18-9143-641c362c472a" (UID: "db19ebef-05c6-4b18-9143-641c362c472a"). InnerVolumeSpecName "kube-api-access-k5mcz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:21:15 crc kubenswrapper[4881]: I0121 13:21:15.786260 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db19ebef-05c6-4b18-9143-641c362c472a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:15 crc kubenswrapper[4881]: I0121 13:21:15.786301 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5mcz\" (UniqueName: \"kubernetes.io/projected/db19ebef-05c6-4b18-9143-641c362c472a-kube-api-access-k5mcz\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:15 crc kubenswrapper[4881]: I0121 13:21:15.812407 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db19ebef-05c6-4b18-9143-641c362c472a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db19ebef-05c6-4b18-9143-641c362c472a" (UID: "db19ebef-05c6-4b18-9143-641c362c472a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:21:15 crc kubenswrapper[4881]: I0121 13:21:15.889310 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db19ebef-05c6-4b18-9143-641c362c472a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:16 crc kubenswrapper[4881]: I0121 13:21:16.002748 4881 generic.go:334] "Generic (PLEG): container finished" podID="db19ebef-05c6-4b18-9143-641c362c472a" containerID="18fba6b402b1f7f76f16f037f9cbdc79db022c2952c74431b3a0f07a73053da5" exitCode=0 Jan 21 13:21:16 crc kubenswrapper[4881]: I0121 13:21:16.002818 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2b5n" event={"ID":"db19ebef-05c6-4b18-9143-641c362c472a","Type":"ContainerDied","Data":"18fba6b402b1f7f76f16f037f9cbdc79db022c2952c74431b3a0f07a73053da5"} Jan 21 13:21:16 crc kubenswrapper[4881]: I0121 13:21:16.002859 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2b5n" event={"ID":"db19ebef-05c6-4b18-9143-641c362c472a","Type":"ContainerDied","Data":"ddf4df98d45221ed009798fb432f66248e0003d2feeb478daa19954df3572ec4"} Jan 21 13:21:16 crc kubenswrapper[4881]: I0121 13:21:16.002884 4881 scope.go:117] "RemoveContainer" containerID="18fba6b402b1f7f76f16f037f9cbdc79db022c2952c74431b3a0f07a73053da5" Jan 21 13:21:16 crc kubenswrapper[4881]: I0121 13:21:16.003084 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w2b5n" Jan 21 13:21:16 crc kubenswrapper[4881]: I0121 13:21:16.035859 4881 scope.go:117] "RemoveContainer" containerID="29cdfe4f4a58d6bfbac2fda27216d3e6f54bfa02a8c3395266542bab5b1d563e" Jan 21 13:21:16 crc kubenswrapper[4881]: I0121 13:21:16.069016 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w2b5n"] Jan 21 13:21:16 crc kubenswrapper[4881]: I0121 13:21:16.090218 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w2b5n"] Jan 21 13:21:16 crc kubenswrapper[4881]: I0121 13:21:16.090881 4881 scope.go:117] "RemoveContainer" containerID="6dbe2048b7bc3a11cae2e8d7d9c920a0149d21d882e0a5f95950ab0f8e3a03a8" Jan 21 13:21:16 crc kubenswrapper[4881]: I0121 13:21:16.138832 4881 scope.go:117] "RemoveContainer" containerID="18fba6b402b1f7f76f16f037f9cbdc79db022c2952c74431b3a0f07a73053da5" Jan 21 13:21:16 crc kubenswrapper[4881]: E0121 13:21:16.139512 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18fba6b402b1f7f76f16f037f9cbdc79db022c2952c74431b3a0f07a73053da5\": container with ID starting with 18fba6b402b1f7f76f16f037f9cbdc79db022c2952c74431b3a0f07a73053da5 not found: ID does not exist" containerID="18fba6b402b1f7f76f16f037f9cbdc79db022c2952c74431b3a0f07a73053da5" Jan 21 13:21:16 crc kubenswrapper[4881]: I0121 13:21:16.139582 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18fba6b402b1f7f76f16f037f9cbdc79db022c2952c74431b3a0f07a73053da5"} err="failed to get container status \"18fba6b402b1f7f76f16f037f9cbdc79db022c2952c74431b3a0f07a73053da5\": rpc error: code = NotFound desc = could not find container \"18fba6b402b1f7f76f16f037f9cbdc79db022c2952c74431b3a0f07a73053da5\": container with ID starting with 18fba6b402b1f7f76f16f037f9cbdc79db022c2952c74431b3a0f07a73053da5 not found: ID does not exist" Jan 21 13:21:16 crc kubenswrapper[4881]: I0121 13:21:16.139621 4881 scope.go:117] "RemoveContainer" containerID="29cdfe4f4a58d6bfbac2fda27216d3e6f54bfa02a8c3395266542bab5b1d563e" Jan 21 13:21:16 crc kubenswrapper[4881]: E0121 13:21:16.140294 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29cdfe4f4a58d6bfbac2fda27216d3e6f54bfa02a8c3395266542bab5b1d563e\": container with ID starting with 29cdfe4f4a58d6bfbac2fda27216d3e6f54bfa02a8c3395266542bab5b1d563e not found: ID does not exist" containerID="29cdfe4f4a58d6bfbac2fda27216d3e6f54bfa02a8c3395266542bab5b1d563e" Jan 21 13:21:16 crc kubenswrapper[4881]: I0121 13:21:16.140347 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29cdfe4f4a58d6bfbac2fda27216d3e6f54bfa02a8c3395266542bab5b1d563e"} err="failed to get container status \"29cdfe4f4a58d6bfbac2fda27216d3e6f54bfa02a8c3395266542bab5b1d563e\": rpc error: code = NotFound desc = could not find container \"29cdfe4f4a58d6bfbac2fda27216d3e6f54bfa02a8c3395266542bab5b1d563e\": container with ID starting with 29cdfe4f4a58d6bfbac2fda27216d3e6f54bfa02a8c3395266542bab5b1d563e not found: ID does not exist" Jan 21 13:21:16 crc kubenswrapper[4881]: I0121 13:21:16.140385 4881 scope.go:117] "RemoveContainer" containerID="6dbe2048b7bc3a11cae2e8d7d9c920a0149d21d882e0a5f95950ab0f8e3a03a8" Jan 21 13:21:16 crc kubenswrapper[4881]: E0121 13:21:16.140760 4881 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"6dbe2048b7bc3a11cae2e8d7d9c920a0149d21d882e0a5f95950ab0f8e3a03a8\": container with ID starting with 6dbe2048b7bc3a11cae2e8d7d9c920a0149d21d882e0a5f95950ab0f8e3a03a8 not found: ID does not exist" containerID="6dbe2048b7bc3a11cae2e8d7d9c920a0149d21d882e0a5f95950ab0f8e3a03a8" Jan 21 13:21:16 crc kubenswrapper[4881]: I0121 13:21:16.140824 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dbe2048b7bc3a11cae2e8d7d9c920a0149d21d882e0a5f95950ab0f8e3a03a8"} err="failed to get container status \"6dbe2048b7bc3a11cae2e8d7d9c920a0149d21d882e0a5f95950ab0f8e3a03a8\": rpc error: code = NotFound desc = could not find container \"6dbe2048b7bc3a11cae2e8d7d9c920a0149d21d882e0a5f95950ab0f8e3a03a8\": container with ID starting with 6dbe2048b7bc3a11cae2e8d7d9c920a0149d21d882e0a5f95950ab0f8e3a03a8 not found: ID does not exist" Jan 21 13:21:17 crc kubenswrapper[4881]: I0121 13:21:17.325335 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db19ebef-05c6-4b18-9143-641c362c472a" path="/var/lib/kubelet/pods/db19ebef-05c6-4b18-9143-641c362c472a/volumes" Jan 21 13:21:36 crc kubenswrapper[4881]: I0121 13:21:36.188160 4881 scope.go:117] "RemoveContainer" containerID="b2480cdd412677da34ca1262943186b4f02a412993e268c2cc5a3c46d5441e61" Jan 21 13:21:36 crc kubenswrapper[4881]: I0121 13:21:36.232780 4881 scope.go:117] "RemoveContainer" containerID="7905ef1bd8eb4c2a74ecd66dee0f7a7d01738c48ab72e0bfb49efb8ba199940b" Jan 21 13:21:36 crc kubenswrapper[4881]: I0121 13:21:36.262535 4881 scope.go:117] "RemoveContainer" containerID="1f0cf2aba23d64564f86d3e47e178b26c66b88713e2c1b4e63ada03ff3001e47" Jan 21 13:21:54 crc kubenswrapper[4881]: E0121 13:21:54.351596 4881 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.042s" Jan 21 13:23:29 crc kubenswrapper[4881]: I0121 13:23:29.851378 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:23:29 crc kubenswrapper[4881]: I0121 13:23:29.852219 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.242922 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vhjdq"] Jan 21 13:23:47 crc kubenswrapper[4881]: E0121 13:23:47.245396 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db19ebef-05c6-4b18-9143-641c362c472a" containerName="registry-server" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.245496 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="db19ebef-05c6-4b18-9143-641c362c472a" containerName="registry-server" Jan 21 13:23:47 crc kubenswrapper[4881]: E0121 13:23:47.245572 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db19ebef-05c6-4b18-9143-641c362c472a" containerName="extract-utilities" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.245633 4881 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="db19ebef-05c6-4b18-9143-641c362c472a" containerName="extract-utilities" Jan 21 13:23:47 crc kubenswrapper[4881]: E0121 13:23:47.245828 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db19ebef-05c6-4b18-9143-641c362c472a" containerName="extract-content" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.245899 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="db19ebef-05c6-4b18-9143-641c362c472a" containerName="extract-content" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.247294 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="db19ebef-05c6-4b18-9143-641c362c472a" containerName="registry-server" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.251678 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.257615 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vhjdq"] Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.378959 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftpn2\" (UniqueName: \"kubernetes.io/projected/e5931128-9209-474d-b0c0-430405aba54d-kube-api-access-ftpn2\") pod \"community-operators-vhjdq\" (UID: \"e5931128-9209-474d-b0c0-430405aba54d\") " pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.379440 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5931128-9209-474d-b0c0-430405aba54d-utilities\") pod \"community-operators-vhjdq\" (UID: \"e5931128-9209-474d-b0c0-430405aba54d\") " pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.379568 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5931128-9209-474d-b0c0-430405aba54d-catalog-content\") pod \"community-operators-vhjdq\" (UID: \"e5931128-9209-474d-b0c0-430405aba54d\") " pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.481934 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5931128-9209-474d-b0c0-430405aba54d-utilities\") pod \"community-operators-vhjdq\" (UID: \"e5931128-9209-474d-b0c0-430405aba54d\") " pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.482073 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5931128-9209-474d-b0c0-430405aba54d-catalog-content\") pod \"community-operators-vhjdq\" (UID: \"e5931128-9209-474d-b0c0-430405aba54d\") " pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.482221 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftpn2\" (UniqueName: \"kubernetes.io/projected/e5931128-9209-474d-b0c0-430405aba54d-kube-api-access-ftpn2\") pod \"community-operators-vhjdq\" (UID: \"e5931128-9209-474d-b0c0-430405aba54d\") " pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.483342 4881 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5931128-9209-474d-b0c0-430405aba54d-utilities\") pod \"community-operators-vhjdq\" (UID: \"e5931128-9209-474d-b0c0-430405aba54d\") " pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.484628 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5931128-9209-474d-b0c0-430405aba54d-catalog-content\") pod \"community-operators-vhjdq\" (UID: \"e5931128-9209-474d-b0c0-430405aba54d\") " pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.519431 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftpn2\" (UniqueName: \"kubernetes.io/projected/e5931128-9209-474d-b0c0-430405aba54d-kube-api-access-ftpn2\") pod \"community-operators-vhjdq\" (UID: \"e5931128-9209-474d-b0c0-430405aba54d\") " pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.611874 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:23:48 crc kubenswrapper[4881]: I0121 13:23:48.247370 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vhjdq"] Jan 21 13:23:48 crc kubenswrapper[4881]: I0121 13:23:48.876893 4881 generic.go:334] "Generic (PLEG): container finished" podID="e5931128-9209-474d-b0c0-430405aba54d" containerID="1dcf807907c7c61b3327bd841ffb67f6eaa94ff76bb682a90885d8c3edaa4561" exitCode=0 Jan 21 13:23:48 crc kubenswrapper[4881]: I0121 13:23:48.876963 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhjdq" event={"ID":"e5931128-9209-474d-b0c0-430405aba54d","Type":"ContainerDied","Data":"1dcf807907c7c61b3327bd841ffb67f6eaa94ff76bb682a90885d8c3edaa4561"} Jan 21 13:23:48 crc kubenswrapper[4881]: I0121 13:23:48.877007 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhjdq" event={"ID":"e5931128-9209-474d-b0c0-430405aba54d","Type":"ContainerStarted","Data":"4fee55b896d0ecbf9818e45d47464bfc2bc9c8ad108315cfabe6f1907d2c198c"} Jan 21 13:23:50 crc kubenswrapper[4881]: I0121 13:23:50.898316 4881 generic.go:334] "Generic (PLEG): container finished" podID="e5931128-9209-474d-b0c0-430405aba54d" containerID="fcc04433138715defe86fdd8c9275c671e551753f371db62c8278823285624d2" exitCode=0 Jan 21 13:23:50 crc kubenswrapper[4881]: I0121 13:23:50.899658 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhjdq" event={"ID":"e5931128-9209-474d-b0c0-430405aba54d","Type":"ContainerDied","Data":"fcc04433138715defe86fdd8c9275c671e551753f371db62c8278823285624d2"} Jan 21 13:23:51 crc kubenswrapper[4881]: I0121 13:23:51.911616 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhjdq" event={"ID":"e5931128-9209-474d-b0c0-430405aba54d","Type":"ContainerStarted","Data":"9b34621f5a1bcd3c8d5cb4fca7e17a589967058ec4af1cef1580ef5949ba4bc7"} Jan 21 13:23:51 crc kubenswrapper[4881]: I0121 13:23:51.944564 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vhjdq" podStartSLOduration=2.4966168460000002 podStartE2EDuration="4.944539616s" 
podCreationTimestamp="2026-01-21 13:23:47 +0000 UTC" firstStartedPulling="2026-01-21 13:23:48.879889269 +0000 UTC m=+8816.139845738" lastFinishedPulling="2026-01-21 13:23:51.327812039 +0000 UTC m=+8818.587768508" observedRunningTime="2026-01-21 13:23:51.936535522 +0000 UTC m=+8819.196492011" watchObservedRunningTime="2026-01-21 13:23:51.944539616 +0000 UTC m=+8819.204496085" Jan 21 13:23:57 crc kubenswrapper[4881]: I0121 13:23:57.620490 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:23:57 crc kubenswrapper[4881]: I0121 13:23:57.622008 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:23:57 crc kubenswrapper[4881]: I0121 13:23:57.701860 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:23:58 crc kubenswrapper[4881]: I0121 13:23:58.045269 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:23:58 crc kubenswrapper[4881]: I0121 13:23:58.115454 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vhjdq"] Jan 21 13:23:59 crc kubenswrapper[4881]: I0121 13:23:59.851257 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:23:59 crc kubenswrapper[4881]: I0121 13:23:59.851869 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:24:00 crc kubenswrapper[4881]: I0121 13:24:00.008699 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vhjdq" podUID="e5931128-9209-474d-b0c0-430405aba54d" containerName="registry-server" containerID="cri-o://9b34621f5a1bcd3c8d5cb4fca7e17a589967058ec4af1cef1580ef5949ba4bc7" gracePeriod=2 Jan 21 13:24:00 crc kubenswrapper[4881]: I0121 13:24:00.530780 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:24:00 crc kubenswrapper[4881]: I0121 13:24:00.592022 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5931128-9209-474d-b0c0-430405aba54d-catalog-content\") pod \"e5931128-9209-474d-b0c0-430405aba54d\" (UID: \"e5931128-9209-474d-b0c0-430405aba54d\") " Jan 21 13:24:00 crc kubenswrapper[4881]: I0121 13:24:00.592297 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftpn2\" (UniqueName: \"kubernetes.io/projected/e5931128-9209-474d-b0c0-430405aba54d-kube-api-access-ftpn2\") pod \"e5931128-9209-474d-b0c0-430405aba54d\" (UID: \"e5931128-9209-474d-b0c0-430405aba54d\") " Jan 21 13:24:00 crc kubenswrapper[4881]: I0121 13:24:00.592347 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5931128-9209-474d-b0c0-430405aba54d-utilities\") pod \"e5931128-9209-474d-b0c0-430405aba54d\" (UID: \"e5931128-9209-474d-b0c0-430405aba54d\") " Jan 21 13:24:00 crc kubenswrapper[4881]: I0121 13:24:00.593200 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5931128-9209-474d-b0c0-430405aba54d-utilities" (OuterVolumeSpecName: "utilities") pod "e5931128-9209-474d-b0c0-430405aba54d" (UID: "e5931128-9209-474d-b0c0-430405aba54d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:24:00 crc kubenswrapper[4881]: I0121 13:24:00.598701 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5931128-9209-474d-b0c0-430405aba54d-kube-api-access-ftpn2" (OuterVolumeSpecName: "kube-api-access-ftpn2") pod "e5931128-9209-474d-b0c0-430405aba54d" (UID: "e5931128-9209-474d-b0c0-430405aba54d"). InnerVolumeSpecName "kube-api-access-ftpn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:24:00 crc kubenswrapper[4881]: I0121 13:24:00.696237 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftpn2\" (UniqueName: \"kubernetes.io/projected/e5931128-9209-474d-b0c0-430405aba54d-kube-api-access-ftpn2\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:00 crc kubenswrapper[4881]: I0121 13:24:00.696283 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5931128-9209-474d-b0c0-430405aba54d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:00 crc kubenswrapper[4881]: I0121 13:24:00.967135 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5931128-9209-474d-b0c0-430405aba54d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5931128-9209-474d-b0c0-430405aba54d" (UID: "e5931128-9209-474d-b0c0-430405aba54d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.009190 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5931128-9209-474d-b0c0-430405aba54d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.026541 4881 generic.go:334] "Generic (PLEG): container finished" podID="e5931128-9209-474d-b0c0-430405aba54d" containerID="9b34621f5a1bcd3c8d5cb4fca7e17a589967058ec4af1cef1580ef5949ba4bc7" exitCode=0 Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.026603 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhjdq" event={"ID":"e5931128-9209-474d-b0c0-430405aba54d","Type":"ContainerDied","Data":"9b34621f5a1bcd3c8d5cb4fca7e17a589967058ec4af1cef1580ef5949ba4bc7"} Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.026633 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhjdq" event={"ID":"e5931128-9209-474d-b0c0-430405aba54d","Type":"ContainerDied","Data":"4fee55b896d0ecbf9818e45d47464bfc2bc9c8ad108315cfabe6f1907d2c198c"} Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.026650 4881 scope.go:117] "RemoveContainer" containerID="9b34621f5a1bcd3c8d5cb4fca7e17a589967058ec4af1cef1580ef5949ba4bc7" Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.026946 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.072525 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vhjdq"] Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.076993 4881 scope.go:117] "RemoveContainer" containerID="fcc04433138715defe86fdd8c9275c671e551753f371db62c8278823285624d2" Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.086561 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vhjdq"] Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.115331 4881 scope.go:117] "RemoveContainer" containerID="1dcf807907c7c61b3327bd841ffb67f6eaa94ff76bb682a90885d8c3edaa4561" Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.171070 4881 scope.go:117] "RemoveContainer" containerID="9b34621f5a1bcd3c8d5cb4fca7e17a589967058ec4af1cef1580ef5949ba4bc7" Jan 21 13:24:01 crc kubenswrapper[4881]: E0121 13:24:01.171975 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b34621f5a1bcd3c8d5cb4fca7e17a589967058ec4af1cef1580ef5949ba4bc7\": container with ID starting with 9b34621f5a1bcd3c8d5cb4fca7e17a589967058ec4af1cef1580ef5949ba4bc7 not found: ID does not exist" containerID="9b34621f5a1bcd3c8d5cb4fca7e17a589967058ec4af1cef1580ef5949ba4bc7" Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.172019 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b34621f5a1bcd3c8d5cb4fca7e17a589967058ec4af1cef1580ef5949ba4bc7"} err="failed to get container status \"9b34621f5a1bcd3c8d5cb4fca7e17a589967058ec4af1cef1580ef5949ba4bc7\": rpc error: code = NotFound desc = could not find container \"9b34621f5a1bcd3c8d5cb4fca7e17a589967058ec4af1cef1580ef5949ba4bc7\": container with ID starting with 9b34621f5a1bcd3c8d5cb4fca7e17a589967058ec4af1cef1580ef5949ba4bc7 not found: ID does not exist" Jan 21 
13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.172047 4881 scope.go:117] "RemoveContainer" containerID="fcc04433138715defe86fdd8c9275c671e551753f371db62c8278823285624d2" Jan 21 13:24:01 crc kubenswrapper[4881]: E0121 13:24:01.172531 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcc04433138715defe86fdd8c9275c671e551753f371db62c8278823285624d2\": container with ID starting with fcc04433138715defe86fdd8c9275c671e551753f371db62c8278823285624d2 not found: ID does not exist" containerID="fcc04433138715defe86fdd8c9275c671e551753f371db62c8278823285624d2" Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.172563 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcc04433138715defe86fdd8c9275c671e551753f371db62c8278823285624d2"} err="failed to get container status \"fcc04433138715defe86fdd8c9275c671e551753f371db62c8278823285624d2\": rpc error: code = NotFound desc = could not find container \"fcc04433138715defe86fdd8c9275c671e551753f371db62c8278823285624d2\": container with ID starting with fcc04433138715defe86fdd8c9275c671e551753f371db62c8278823285624d2 not found: ID does not exist" Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.172580 4881 scope.go:117] "RemoveContainer" containerID="1dcf807907c7c61b3327bd841ffb67f6eaa94ff76bb682a90885d8c3edaa4561" Jan 21 13:24:01 crc kubenswrapper[4881]: E0121 13:24:01.173364 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dcf807907c7c61b3327bd841ffb67f6eaa94ff76bb682a90885d8c3edaa4561\": container with ID starting with 1dcf807907c7c61b3327bd841ffb67f6eaa94ff76bb682a90885d8c3edaa4561 not found: ID does not exist" containerID="1dcf807907c7c61b3327bd841ffb67f6eaa94ff76bb682a90885d8c3edaa4561" Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.173390 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dcf807907c7c61b3327bd841ffb67f6eaa94ff76bb682a90885d8c3edaa4561"} err="failed to get container status \"1dcf807907c7c61b3327bd841ffb67f6eaa94ff76bb682a90885d8c3edaa4561\": rpc error: code = NotFound desc = could not find container \"1dcf807907c7c61b3327bd841ffb67f6eaa94ff76bb682a90885d8c3edaa4561\": container with ID starting with 1dcf807907c7c61b3327bd841ffb67f6eaa94ff76bb682a90885d8c3edaa4561 not found: ID does not exist" Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.335343 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5931128-9209-474d-b0c0-430405aba54d" path="/var/lib/kubelet/pods/e5931128-9209-474d-b0c0-430405aba54d/volumes" Jan 21 13:24:29 crc kubenswrapper[4881]: I0121 13:24:29.851337 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:24:29 crc kubenswrapper[4881]: I0121 13:24:29.852043 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:24:29 crc kubenswrapper[4881]: I0121 13:24:29.852119 4881 kubelet.go:2542] 
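The liveness probe that keeps failing here is a plain HTTP GET against http://127.0.0.1:8798/health; "connection refused" means nothing is listening on the port, and once the probe has failed failureThreshold times in a row (3 by default) the kubelet restarts the container, as it did at 13:21 and does again just below. A rough sketch of one such probe attempt (a simplification, not the prober.go implementation):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeHTTP performs one liveness check the way the entries above describe:
// GET the health endpoint and treat transport errors (such as "connection
// refused") or non-success status codes as failure. HTTP probes count any
// code in [200, 400) as success.
func probeHTTP(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. dial tcp 127.0.0.1:8798: connect: connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("probe failed: status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probeHTTP("http://127.0.0.1:8798/health", time.Second); err != nil {
		fmt.Println("Probe failed:", err)
	}
}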
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 13:24:29 crc kubenswrapper[4881]: I0121 13:24:29.853249 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 13:24:29 crc kubenswrapper[4881]: I0121 13:24:29.853326 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" gracePeriod=600 Jan 21 13:24:29 crc kubenswrapper[4881]: E0121 13:24:29.979527 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:24:30 crc kubenswrapper[4881]: I0121 13:24:30.392478 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" exitCode=0 Jan 21 13:24:30 crc kubenswrapper[4881]: I0121 13:24:30.392739 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4"} Jan 21 13:24:30 crc kubenswrapper[4881]: I0121 13:24:30.393133 4881 scope.go:117] "RemoveContainer" containerID="3ae329a055e11a6e18e47ddb94b164ca6b139ccd6dac8d7c44083794de49a8f4" Jan 21 13:24:30 crc kubenswrapper[4881]: I0121 13:24:30.394273 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:24:30 crc kubenswrapper[4881]: E0121 13:24:30.394956 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:24:43 crc kubenswrapper[4881]: I0121 13:24:43.328263 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:24:43 crc kubenswrapper[4881]: E0121 13:24:43.328930 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" 
podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:24:57 crc kubenswrapper[4881]: I0121 13:24:57.311182 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:24:57 crc kubenswrapper[4881]: E0121 13:24:57.312101 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:25:09 crc kubenswrapper[4881]: I0121 13:25:09.312237 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:25:09 crc kubenswrapper[4881]: E0121 13:25:09.313603 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:25:23 crc kubenswrapper[4881]: I0121 13:25:23.327688 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:25:23 crc kubenswrapper[4881]: E0121 13:25:23.328567 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:25:35 crc kubenswrapper[4881]: I0121 13:25:35.312186 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:25:35 crc kubenswrapper[4881]: E0121 13:25:35.313413 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:25:48 crc kubenswrapper[4881]: I0121 13:25:48.310581 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:25:48 crc kubenswrapper[4881]: E0121 13:25:48.311902 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:26:03 crc kubenswrapper[4881]: I0121 13:26:03.317582 4881 scope.go:117] "RemoveContainer" 
containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:26:03 crc kubenswrapper[4881]: E0121 13:26:03.318313 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.797667 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vb5m2"] Jan 21 13:26:12 crc kubenswrapper[4881]: E0121 13:26:12.799097 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5931128-9209-474d-b0c0-430405aba54d" containerName="registry-server" Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.799125 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5931128-9209-474d-b0c0-430405aba54d" containerName="registry-server" Jan 21 13:26:12 crc kubenswrapper[4881]: E0121 13:26:12.799169 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5931128-9209-474d-b0c0-430405aba54d" containerName="extract-utilities" Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.799180 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5931128-9209-474d-b0c0-430405aba54d" containerName="extract-utilities" Jan 21 13:26:12 crc kubenswrapper[4881]: E0121 13:26:12.799238 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5931128-9209-474d-b0c0-430405aba54d" containerName="extract-content" Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.799251 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5931128-9209-474d-b0c0-430405aba54d" containerName="extract-content" Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.799585 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5931128-9209-474d-b0c0-430405aba54d" containerName="registry-server" Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.802030 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.811760 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vb5m2"] Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.853942 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b86nz\" (UniqueName: \"kubernetes.io/projected/bb6f50a9-e997-4629-bec7-5b36f8467213-kube-api-access-b86nz\") pod \"redhat-marketplace-vb5m2\" (UID: \"bb6f50a9-e997-4629-bec7-5b36f8467213\") " pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.854117 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb6f50a9-e997-4629-bec7-5b36f8467213-catalog-content\") pod \"redhat-marketplace-vb5m2\" (UID: \"bb6f50a9-e997-4629-bec7-5b36f8467213\") " pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.854182 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb6f50a9-e997-4629-bec7-5b36f8467213-utilities\") pod \"redhat-marketplace-vb5m2\" (UID: \"bb6f50a9-e997-4629-bec7-5b36f8467213\") " pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.957052 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b86nz\" (UniqueName: \"kubernetes.io/projected/bb6f50a9-e997-4629-bec7-5b36f8467213-kube-api-access-b86nz\") pod \"redhat-marketplace-vb5m2\" (UID: \"bb6f50a9-e997-4629-bec7-5b36f8467213\") " pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.957143 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb6f50a9-e997-4629-bec7-5b36f8467213-catalog-content\") pod \"redhat-marketplace-vb5m2\" (UID: \"bb6f50a9-e997-4629-bec7-5b36f8467213\") " pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.957176 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb6f50a9-e997-4629-bec7-5b36f8467213-utilities\") pod \"redhat-marketplace-vb5m2\" (UID: \"bb6f50a9-e997-4629-bec7-5b36f8467213\") " pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.958016 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb6f50a9-e997-4629-bec7-5b36f8467213-utilities\") pod \"redhat-marketplace-vb5m2\" (UID: \"bb6f50a9-e997-4629-bec7-5b36f8467213\") " pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.958077 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb6f50a9-e997-4629-bec7-5b36f8467213-catalog-content\") pod \"redhat-marketplace-vb5m2\" (UID: \"bb6f50a9-e997-4629-bec7-5b36f8467213\") " pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.992834 4881 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-b86nz\" (UniqueName: \"kubernetes.io/projected/bb6f50a9-e997-4629-bec7-5b36f8467213-kube-api-access-b86nz\") pod \"redhat-marketplace-vb5m2\" (UID: \"bb6f50a9-e997-4629-bec7-5b36f8467213\") " pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:13 crc kubenswrapper[4881]: I0121 13:26:13.181496 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:13 crc kubenswrapper[4881]: I0121 13:26:13.768385 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vb5m2"] Jan 21 13:26:13 crc kubenswrapper[4881]: W0121 13:26:13.786489 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb6f50a9_e997_4629_bec7_5b36f8467213.slice/crio-135065841fcb0e210ebbee24ed1ceeaff870895357eed67f8b8b185d7ce2cb2f WatchSource:0}: Error finding container 135065841fcb0e210ebbee24ed1ceeaff870895357eed67f8b8b185d7ce2cb2f: Status 404 returned error can't find the container with id 135065841fcb0e210ebbee24ed1ceeaff870895357eed67f8b8b185d7ce2cb2f Jan 21 13:26:14 crc kubenswrapper[4881]: I0121 13:26:14.311317 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:26:14 crc kubenswrapper[4881]: E0121 13:26:14.311639 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:26:14 crc kubenswrapper[4881]: I0121 13:26:14.721817 4881 generic.go:334] "Generic (PLEG): container finished" podID="bb6f50a9-e997-4629-bec7-5b36f8467213" containerID="e4fcd361049a40e2cf6013975baa76bd341d51cb8094aec3651fcae44987a113" exitCode=0 Jan 21 13:26:14 crc kubenswrapper[4881]: I0121 13:26:14.721915 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb5m2" event={"ID":"bb6f50a9-e997-4629-bec7-5b36f8467213","Type":"ContainerDied","Data":"e4fcd361049a40e2cf6013975baa76bd341d51cb8094aec3651fcae44987a113"} Jan 21 13:26:14 crc kubenswrapper[4881]: I0121 13:26:14.722165 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb5m2" event={"ID":"bb6f50a9-e997-4629-bec7-5b36f8467213","Type":"ContainerStarted","Data":"135065841fcb0e210ebbee24ed1ceeaff870895357eed67f8b8b185d7ce2cb2f"} Jan 21 13:26:14 crc kubenswrapper[4881]: I0121 13:26:14.726573 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 13:26:16 crc kubenswrapper[4881]: I0121 13:26:16.748293 4881 generic.go:334] "Generic (PLEG): container finished" podID="bb6f50a9-e997-4629-bec7-5b36f8467213" containerID="b3303c4261f3440d0169b7caa34aa63d1b3bd27bea62238bec06aabbf1a04789" exitCode=0 Jan 21 13:26:16 crc kubenswrapper[4881]: I0121 13:26:16.748386 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb5m2" event={"ID":"bb6f50a9-e997-4629-bec7-5b36f8467213","Type":"ContainerDied","Data":"b3303c4261f3440d0169b7caa34aa63d1b3bd27bea62238bec06aabbf1a04789"} Jan 21 13:26:17 crc kubenswrapper[4881]: I0121 
13:26:17.766896 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb5m2" event={"ID":"bb6f50a9-e997-4629-bec7-5b36f8467213","Type":"ContainerStarted","Data":"7dd8a762e96ff6a259c0d7125968f53fa2dfb55f33c0dcefc1a43c070370cd41"} Jan 21 13:26:17 crc kubenswrapper[4881]: I0121 13:26:17.807122 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vb5m2" podStartSLOduration=3.34028762 podStartE2EDuration="5.80707773s" podCreationTimestamp="2026-01-21 13:26:12 +0000 UTC" firstStartedPulling="2026-01-21 13:26:14.726321359 +0000 UTC m=+8961.986277828" lastFinishedPulling="2026-01-21 13:26:17.193111459 +0000 UTC m=+8964.453067938" observedRunningTime="2026-01-21 13:26:17.794132484 +0000 UTC m=+8965.054088973" watchObservedRunningTime="2026-01-21 13:26:17.80707773 +0000 UTC m=+8965.067034209" Jan 21 13:26:23 crc kubenswrapper[4881]: I0121 13:26:23.182691 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:23 crc kubenswrapper[4881]: I0121 13:26:23.184344 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:23 crc kubenswrapper[4881]: I0121 13:26:23.229338 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:23 crc kubenswrapper[4881]: I0121 13:26:23.536396 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:23 crc kubenswrapper[4881]: I0121 13:26:23.586378 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vb5m2"] Jan 21 13:26:25 crc kubenswrapper[4881]: I0121 13:26:25.476565 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vb5m2" podUID="bb6f50a9-e997-4629-bec7-5b36f8467213" containerName="registry-server" containerID="cri-o://7dd8a762e96ff6a259c0d7125968f53fa2dfb55f33c0dcefc1a43c070370cd41" gracePeriod=2 Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.257577 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.360816 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b86nz\" (UniqueName: \"kubernetes.io/projected/bb6f50a9-e997-4629-bec7-5b36f8467213-kube-api-access-b86nz\") pod \"bb6f50a9-e997-4629-bec7-5b36f8467213\" (UID: \"bb6f50a9-e997-4629-bec7-5b36f8467213\") " Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.360936 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb6f50a9-e997-4629-bec7-5b36f8467213-catalog-content\") pod \"bb6f50a9-e997-4629-bec7-5b36f8467213\" (UID: \"bb6f50a9-e997-4629-bec7-5b36f8467213\") " Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.361060 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb6f50a9-e997-4629-bec7-5b36f8467213-utilities\") pod \"bb6f50a9-e997-4629-bec7-5b36f8467213\" (UID: \"bb6f50a9-e997-4629-bec7-5b36f8467213\") " Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.362019 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb6f50a9-e997-4629-bec7-5b36f8467213-utilities" (OuterVolumeSpecName: "utilities") pod "bb6f50a9-e997-4629-bec7-5b36f8467213" (UID: "bb6f50a9-e997-4629-bec7-5b36f8467213"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.373086 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb6f50a9-e997-4629-bec7-5b36f8467213-kube-api-access-b86nz" (OuterVolumeSpecName: "kube-api-access-b86nz") pod "bb6f50a9-e997-4629-bec7-5b36f8467213" (UID: "bb6f50a9-e997-4629-bec7-5b36f8467213"). InnerVolumeSpecName "kube-api-access-b86nz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.391088 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb6f50a9-e997-4629-bec7-5b36f8467213-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb6f50a9-e997-4629-bec7-5b36f8467213" (UID: "bb6f50a9-e997-4629-bec7-5b36f8467213"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.465109 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb6f50a9-e997-4629-bec7-5b36f8467213-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.465379 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b86nz\" (UniqueName: \"kubernetes.io/projected/bb6f50a9-e997-4629-bec7-5b36f8467213-kube-api-access-b86nz\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.465394 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb6f50a9-e997-4629-bec7-5b36f8467213-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.490059 4881 generic.go:334] "Generic (PLEG): container finished" podID="bb6f50a9-e997-4629-bec7-5b36f8467213" containerID="7dd8a762e96ff6a259c0d7125968f53fa2dfb55f33c0dcefc1a43c070370cd41" exitCode=0 Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.490097 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.490102 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb5m2" event={"ID":"bb6f50a9-e997-4629-bec7-5b36f8467213","Type":"ContainerDied","Data":"7dd8a762e96ff6a259c0d7125968f53fa2dfb55f33c0dcefc1a43c070370cd41"} Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.490132 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb5m2" event={"ID":"bb6f50a9-e997-4629-bec7-5b36f8467213","Type":"ContainerDied","Data":"135065841fcb0e210ebbee24ed1ceeaff870895357eed67f8b8b185d7ce2cb2f"} Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.490154 4881 scope.go:117] "RemoveContainer" containerID="7dd8a762e96ff6a259c0d7125968f53fa2dfb55f33c0dcefc1a43c070370cd41" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.529950 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vb5m2"] Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.535931 4881 scope.go:117] "RemoveContainer" containerID="b3303c4261f3440d0169b7caa34aa63d1b3bd27bea62238bec06aabbf1a04789" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.541028 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vb5m2"] Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.563472 4881 scope.go:117] "RemoveContainer" containerID="e4fcd361049a40e2cf6013975baa76bd341d51cb8094aec3651fcae44987a113" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.622597 4881 scope.go:117] "RemoveContainer" containerID="7dd8a762e96ff6a259c0d7125968f53fa2dfb55f33c0dcefc1a43c070370cd41" Jan 21 13:26:26 crc kubenswrapper[4881]: E0121 13:26:26.624029 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dd8a762e96ff6a259c0d7125968f53fa2dfb55f33c0dcefc1a43c070370cd41\": container with ID starting with 7dd8a762e96ff6a259c0d7125968f53fa2dfb55f33c0dcefc1a43c070370cd41 not found: ID does not exist" containerID="7dd8a762e96ff6a259c0d7125968f53fa2dfb55f33c0dcefc1a43c070370cd41" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.624074 4881 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dd8a762e96ff6a259c0d7125968f53fa2dfb55f33c0dcefc1a43c070370cd41"} err="failed to get container status \"7dd8a762e96ff6a259c0d7125968f53fa2dfb55f33c0dcefc1a43c070370cd41\": rpc error: code = NotFound desc = could not find container \"7dd8a762e96ff6a259c0d7125968f53fa2dfb55f33c0dcefc1a43c070370cd41\": container with ID starting with 7dd8a762e96ff6a259c0d7125968f53fa2dfb55f33c0dcefc1a43c070370cd41 not found: ID does not exist" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.624103 4881 scope.go:117] "RemoveContainer" containerID="b3303c4261f3440d0169b7caa34aa63d1b3bd27bea62238bec06aabbf1a04789" Jan 21 13:26:26 crc kubenswrapper[4881]: E0121 13:26:26.624690 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3303c4261f3440d0169b7caa34aa63d1b3bd27bea62238bec06aabbf1a04789\": container with ID starting with b3303c4261f3440d0169b7caa34aa63d1b3bd27bea62238bec06aabbf1a04789 not found: ID does not exist" containerID="b3303c4261f3440d0169b7caa34aa63d1b3bd27bea62238bec06aabbf1a04789" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.624720 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3303c4261f3440d0169b7caa34aa63d1b3bd27bea62238bec06aabbf1a04789"} err="failed to get container status \"b3303c4261f3440d0169b7caa34aa63d1b3bd27bea62238bec06aabbf1a04789\": rpc error: code = NotFound desc = could not find container \"b3303c4261f3440d0169b7caa34aa63d1b3bd27bea62238bec06aabbf1a04789\": container with ID starting with b3303c4261f3440d0169b7caa34aa63d1b3bd27bea62238bec06aabbf1a04789 not found: ID does not exist" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.624741 4881 scope.go:117] "RemoveContainer" containerID="e4fcd361049a40e2cf6013975baa76bd341d51cb8094aec3651fcae44987a113" Jan 21 13:26:26 crc kubenswrapper[4881]: E0121 13:26:26.625113 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4fcd361049a40e2cf6013975baa76bd341d51cb8094aec3651fcae44987a113\": container with ID starting with e4fcd361049a40e2cf6013975baa76bd341d51cb8094aec3651fcae44987a113 not found: ID does not exist" containerID="e4fcd361049a40e2cf6013975baa76bd341d51cb8094aec3651fcae44987a113" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.625167 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4fcd361049a40e2cf6013975baa76bd341d51cb8094aec3651fcae44987a113"} err="failed to get container status \"e4fcd361049a40e2cf6013975baa76bd341d51cb8094aec3651fcae44987a113\": rpc error: code = NotFound desc = could not find container \"e4fcd361049a40e2cf6013975baa76bd341d51cb8094aec3651fcae44987a113\": container with ID starting with e4fcd361049a40e2cf6013975baa76bd341d51cb8094aec3651fcae44987a113 not found: ID does not exist" Jan 21 13:26:27 crc kubenswrapper[4881]: I0121 13:26:27.335696 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb6f50a9-e997-4629-bec7-5b36f8467213" path="/var/lib/kubelet/pods/bb6f50a9-e997-4629-bec7-5b36f8467213/volumes" Jan 21 13:26:29 crc kubenswrapper[4881]: I0121 13:26:29.312024 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:26:29 crc kubenswrapper[4881]: E0121 13:26:29.313165 4881 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:26:43 crc kubenswrapper[4881]: I0121 13:26:43.323332 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:26:43 crc kubenswrapper[4881]: E0121 13:26:43.324183 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:26:56 crc kubenswrapper[4881]: I0121 13:26:56.311274 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:26:56 crc kubenswrapper[4881]: E0121 13:26:56.312198 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.001496 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8jbx6"] Jan 21 13:27:03 crc kubenswrapper[4881]: E0121 13:27:03.002900 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb6f50a9-e997-4629-bec7-5b36f8467213" containerName="registry-server" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.002926 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb6f50a9-e997-4629-bec7-5b36f8467213" containerName="registry-server" Jan 21 13:27:03 crc kubenswrapper[4881]: E0121 13:27:03.002949 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb6f50a9-e997-4629-bec7-5b36f8467213" containerName="extract-utilities" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.002961 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb6f50a9-e997-4629-bec7-5b36f8467213" containerName="extract-utilities" Jan 21 13:27:03 crc kubenswrapper[4881]: E0121 13:27:03.003077 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb6f50a9-e997-4629-bec7-5b36f8467213" containerName="extract-content" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.003092 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb6f50a9-e997-4629-bec7-5b36f8467213" containerName="extract-content" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.003419 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb6f50a9-e997-4629-bec7-5b36f8467213" containerName="registry-server" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.006162 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.037962 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8jbx6"] Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.153567 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-catalog-content\") pod \"certified-operators-8jbx6\" (UID: \"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49\") " pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.153641 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-utilities\") pod \"certified-operators-8jbx6\" (UID: \"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49\") " pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.153723 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmq6z\" (UniqueName: \"kubernetes.io/projected/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-kube-api-access-gmq6z\") pod \"certified-operators-8jbx6\" (UID: \"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49\") " pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.256277 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmq6z\" (UniqueName: \"kubernetes.io/projected/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-kube-api-access-gmq6z\") pod \"certified-operators-8jbx6\" (UID: \"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49\") " pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.256567 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-catalog-content\") pod \"certified-operators-8jbx6\" (UID: \"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49\") " pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.256610 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-utilities\") pod \"certified-operators-8jbx6\" (UID: \"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49\") " pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.258510 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-catalog-content\") pod \"certified-operators-8jbx6\" (UID: \"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49\") " pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.258548 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-utilities\") pod \"certified-operators-8jbx6\" (UID: \"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49\") " pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.286459 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gmq6z\" (UniqueName: \"kubernetes.io/projected/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-kube-api-access-gmq6z\") pod \"certified-operators-8jbx6\" (UID: \"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49\") " pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.331077 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:04 crc kubenswrapper[4881]: I0121 13:27:04.197435 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8jbx6"] Jan 21 13:27:05 crc kubenswrapper[4881]: I0121 13:27:05.218026 4881 generic.go:334] "Generic (PLEG): container finished" podID="51bcc54a-e7f1-455f-a90e-6dbb13e2ca49" containerID="4ecb599d45491627d891719f8405002438c76fc0d4d316a1bfd6cd193b1f3a08" exitCode=0 Jan 21 13:27:05 crc kubenswrapper[4881]: I0121 13:27:05.218142 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jbx6" event={"ID":"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49","Type":"ContainerDied","Data":"4ecb599d45491627d891719f8405002438c76fc0d4d316a1bfd6cd193b1f3a08"} Jan 21 13:27:05 crc kubenswrapper[4881]: I0121 13:27:05.218487 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jbx6" event={"ID":"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49","Type":"ContainerStarted","Data":"e7c213d8ebb50ed1685eb2246532fa5ff812d040f27b3f2e8c8f1a768c916445"} Jan 21 13:27:07 crc kubenswrapper[4881]: I0121 13:27:07.248109 4881 generic.go:334] "Generic (PLEG): container finished" podID="51bcc54a-e7f1-455f-a90e-6dbb13e2ca49" containerID="bdabdbe9600c58209cbe056b6045ff78c4f1191546fbd41819662998d09e62ca" exitCode=0 Jan 21 13:27:07 crc kubenswrapper[4881]: I0121 13:27:07.248353 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jbx6" event={"ID":"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49","Type":"ContainerDied","Data":"bdabdbe9600c58209cbe056b6045ff78c4f1191546fbd41819662998d09e62ca"} Jan 21 13:27:07 crc kubenswrapper[4881]: I0121 13:27:07.310641 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:27:07 crc kubenswrapper[4881]: E0121 13:27:07.310943 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:27:08 crc kubenswrapper[4881]: I0121 13:27:08.262798 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jbx6" event={"ID":"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49","Type":"ContainerStarted","Data":"c3618c5333a6cef78d377467f75aa4f9b175d77174b183489d5d38d8e5caa08c"} Jan 21 13:27:08 crc kubenswrapper[4881]: I0121 13:27:08.282041 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8jbx6" podStartSLOduration=3.686252389 podStartE2EDuration="6.282014001s" podCreationTimestamp="2026-01-21 13:27:02 +0000 UTC" firstStartedPulling="2026-01-21 13:27:05.220624153 +0000 UTC m=+9012.480580632" 
lastFinishedPulling="2026-01-21 13:27:07.816385775 +0000 UTC m=+9015.076342244" observedRunningTime="2026-01-21 13:27:08.281282674 +0000 UTC m=+9015.541239153" watchObservedRunningTime="2026-01-21 13:27:08.282014001 +0000 UTC m=+9015.541970470" Jan 21 13:27:13 crc kubenswrapper[4881]: I0121 13:27:13.331338 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:13 crc kubenswrapper[4881]: I0121 13:27:13.331863 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:13 crc kubenswrapper[4881]: I0121 13:27:13.391421 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:14 crc kubenswrapper[4881]: I0121 13:27:14.516605 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:14 crc kubenswrapper[4881]: I0121 13:27:14.574397 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8jbx6"] Jan 21 13:27:16 crc kubenswrapper[4881]: I0121 13:27:16.473239 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8jbx6" podUID="51bcc54a-e7f1-455f-a90e-6dbb13e2ca49" containerName="registry-server" containerID="cri-o://c3618c5333a6cef78d377467f75aa4f9b175d77174b183489d5d38d8e5caa08c" gracePeriod=2 Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.139910 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.255128 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmq6z\" (UniqueName: \"kubernetes.io/projected/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-kube-api-access-gmq6z\") pod \"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49\" (UID: \"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49\") " Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.255268 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-catalog-content\") pod \"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49\" (UID: \"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49\") " Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.255302 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-utilities\") pod \"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49\" (UID: \"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49\") " Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.256732 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-utilities" (OuterVolumeSpecName: "utilities") pod "51bcc54a-e7f1-455f-a90e-6dbb13e2ca49" (UID: "51bcc54a-e7f1-455f-a90e-6dbb13e2ca49"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.269005 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-kube-api-access-gmq6z" (OuterVolumeSpecName: "kube-api-access-gmq6z") pod "51bcc54a-e7f1-455f-a90e-6dbb13e2ca49" (UID: "51bcc54a-e7f1-455f-a90e-6dbb13e2ca49"). InnerVolumeSpecName "kube-api-access-gmq6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.303498 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51bcc54a-e7f1-455f-a90e-6dbb13e2ca49" (UID: "51bcc54a-e7f1-455f-a90e-6dbb13e2ca49"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.359837 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmq6z\" (UniqueName: \"kubernetes.io/projected/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-kube-api-access-gmq6z\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.360104 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.360115 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.493577 4881 generic.go:334] "Generic (PLEG): container finished" podID="51bcc54a-e7f1-455f-a90e-6dbb13e2ca49" containerID="c3618c5333a6cef78d377467f75aa4f9b175d77174b183489d5d38d8e5caa08c" exitCode=0 Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.493624 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jbx6" event={"ID":"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49","Type":"ContainerDied","Data":"c3618c5333a6cef78d377467f75aa4f9b175d77174b183489d5d38d8e5caa08c"} Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.493653 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jbx6" event={"ID":"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49","Type":"ContainerDied","Data":"e7c213d8ebb50ed1685eb2246532fa5ff812d040f27b3f2e8c8f1a768c916445"} Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.493659 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.493671 4881 scope.go:117] "RemoveContainer" containerID="c3618c5333a6cef78d377467f75aa4f9b175d77174b183489d5d38d8e5caa08c" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.514492 4881 scope.go:117] "RemoveContainer" containerID="bdabdbe9600c58209cbe056b6045ff78c4f1191546fbd41819662998d09e62ca" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.548267 4881 scope.go:117] "RemoveContainer" containerID="4ecb599d45491627d891719f8405002438c76fc0d4d316a1bfd6cd193b1f3a08" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.560142 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8jbx6"] Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.575568 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8jbx6"] Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.612388 4881 scope.go:117] "RemoveContainer" containerID="c3618c5333a6cef78d377467f75aa4f9b175d77174b183489d5d38d8e5caa08c" Jan 21 13:27:18 crc kubenswrapper[4881]: E0121 13:27:18.612946 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3618c5333a6cef78d377467f75aa4f9b175d77174b183489d5d38d8e5caa08c\": container with ID starting with c3618c5333a6cef78d377467f75aa4f9b175d77174b183489d5d38d8e5caa08c not found: ID does not exist" containerID="c3618c5333a6cef78d377467f75aa4f9b175d77174b183489d5d38d8e5caa08c" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.612983 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3618c5333a6cef78d377467f75aa4f9b175d77174b183489d5d38d8e5caa08c"} err="failed to get container status \"c3618c5333a6cef78d377467f75aa4f9b175d77174b183489d5d38d8e5caa08c\": rpc error: code = NotFound desc = could not find container \"c3618c5333a6cef78d377467f75aa4f9b175d77174b183489d5d38d8e5caa08c\": container with ID starting with c3618c5333a6cef78d377467f75aa4f9b175d77174b183489d5d38d8e5caa08c not found: ID does not exist" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.613125 4881 scope.go:117] "RemoveContainer" containerID="bdabdbe9600c58209cbe056b6045ff78c4f1191546fbd41819662998d09e62ca" Jan 21 13:27:18 crc kubenswrapper[4881]: E0121 13:27:18.613551 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdabdbe9600c58209cbe056b6045ff78c4f1191546fbd41819662998d09e62ca\": container with ID starting with bdabdbe9600c58209cbe056b6045ff78c4f1191546fbd41819662998d09e62ca not found: ID does not exist" containerID="bdabdbe9600c58209cbe056b6045ff78c4f1191546fbd41819662998d09e62ca" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.613589 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdabdbe9600c58209cbe056b6045ff78c4f1191546fbd41819662998d09e62ca"} err="failed to get container status \"bdabdbe9600c58209cbe056b6045ff78c4f1191546fbd41819662998d09e62ca\": rpc error: code = NotFound desc = could not find container \"bdabdbe9600c58209cbe056b6045ff78c4f1191546fbd41819662998d09e62ca\": container with ID starting with bdabdbe9600c58209cbe056b6045ff78c4f1191546fbd41819662998d09e62ca not found: ID does not exist" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.613610 4881 scope.go:117] "RemoveContainer" 
containerID="4ecb599d45491627d891719f8405002438c76fc0d4d316a1bfd6cd193b1f3a08" Jan 21 13:27:18 crc kubenswrapper[4881]: E0121 13:27:18.613998 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ecb599d45491627d891719f8405002438c76fc0d4d316a1bfd6cd193b1f3a08\": container with ID starting with 4ecb599d45491627d891719f8405002438c76fc0d4d316a1bfd6cd193b1f3a08 not found: ID does not exist" containerID="4ecb599d45491627d891719f8405002438c76fc0d4d316a1bfd6cd193b1f3a08" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.614018 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ecb599d45491627d891719f8405002438c76fc0d4d316a1bfd6cd193b1f3a08"} err="failed to get container status \"4ecb599d45491627d891719f8405002438c76fc0d4d316a1bfd6cd193b1f3a08\": rpc error: code = NotFound desc = could not find container \"4ecb599d45491627d891719f8405002438c76fc0d4d316a1bfd6cd193b1f3a08\": container with ID starting with 4ecb599d45491627d891719f8405002438c76fc0d4d316a1bfd6cd193b1f3a08 not found: ID does not exist" Jan 21 13:27:19 crc kubenswrapper[4881]: I0121 13:27:19.329749 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51bcc54a-e7f1-455f-a90e-6dbb13e2ca49" path="/var/lib/kubelet/pods/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49/volumes" Jan 21 13:27:22 crc kubenswrapper[4881]: I0121 13:27:22.312229 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:27:22 crc kubenswrapper[4881]: E0121 13:27:22.313054 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:27:33 crc kubenswrapper[4881]: I0121 13:27:33.319301 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:27:33 crc kubenswrapper[4881]: E0121 13:27:33.320097 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:27:48 crc kubenswrapper[4881]: I0121 13:27:48.312026 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:27:48 crc kubenswrapper[4881]: E0121 13:27:48.313218 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:28:02 crc kubenswrapper[4881]: I0121 13:28:02.311297 4881 scope.go:117] "RemoveContainer" 
containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:28:02 crc kubenswrapper[4881]: E0121 13:28:02.312205 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:28:13 crc kubenswrapper[4881]: I0121 13:28:13.324759 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:28:13 crc kubenswrapper[4881]: E0121 13:28:13.325775 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:28:25 crc kubenswrapper[4881]: I0121 13:28:25.311820 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:28:25 crc kubenswrapper[4881]: E0121 13:28:25.312971 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:28:36 crc kubenswrapper[4881]: I0121 13:28:36.312442 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:28:36 crc kubenswrapper[4881]: E0121 13:28:36.313692 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:28:51 crc kubenswrapper[4881]: I0121 13:28:51.315554 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:28:51 crc kubenswrapper[4881]: E0121 13:28:51.317081 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:29:04 crc kubenswrapper[4881]: I0121 13:29:04.310619 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:29:04 crc kubenswrapper[4881]: E0121 13:29:04.311718 4881 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:29:18 crc kubenswrapper[4881]: I0121 13:29:18.311455 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:29:18 crc kubenswrapper[4881]: E0121 13:29:18.312344 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:29:29 crc kubenswrapper[4881]: I0121 13:29:29.315934 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:29:29 crc kubenswrapper[4881]: E0121 13:29:29.316829 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:29:44 crc kubenswrapper[4881]: I0121 13:29:44.312087 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:29:44 crc kubenswrapper[4881]: I0121 13:29:44.689361 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"9e57748be28be159b55c45e3fa90ee30718fb2ed9c755f793bb76672c2c13826"} Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.159588 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t"] Jan 21 13:30:00 crc kubenswrapper[4881]: E0121 13:30:00.160667 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51bcc54a-e7f1-455f-a90e-6dbb13e2ca49" containerName="registry-server" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.160693 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="51bcc54a-e7f1-455f-a90e-6dbb13e2ca49" containerName="registry-server" Jan 21 13:30:00 crc kubenswrapper[4881]: E0121 13:30:00.160712 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51bcc54a-e7f1-455f-a90e-6dbb13e2ca49" containerName="extract-content" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.160717 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="51bcc54a-e7f1-455f-a90e-6dbb13e2ca49" containerName="extract-content" Jan 21 13:30:00 crc kubenswrapper[4881]: E0121 13:30:00.160740 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51bcc54a-e7f1-455f-a90e-6dbb13e2ca49" containerName="extract-utilities" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.160746 4881 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="51bcc54a-e7f1-455f-a90e-6dbb13e2ca49" containerName="extract-utilities" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.161062 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="51bcc54a-e7f1-455f-a90e-6dbb13e2ca49" containerName="registry-server" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.161947 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.164535 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.176911 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.188551 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t"] Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.291217 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66b8a832-c205-40a4-9a2f-e70e2f246734-config-volume\") pod \"collect-profiles-29483370-2sh2t\" (UID: \"66b8a832-c205-40a4-9a2f-e70e2f246734\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.291284 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66b8a832-c205-40a4-9a2f-e70e2f246734-secret-volume\") pod \"collect-profiles-29483370-2sh2t\" (UID: \"66b8a832-c205-40a4-9a2f-e70e2f246734\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.291313 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjrnb\" (UniqueName: \"kubernetes.io/projected/66b8a832-c205-40a4-9a2f-e70e2f246734-kube-api-access-bjrnb\") pod \"collect-profiles-29483370-2sh2t\" (UID: \"66b8a832-c205-40a4-9a2f-e70e2f246734\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.394146 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66b8a832-c205-40a4-9a2f-e70e2f246734-config-volume\") pod \"collect-profiles-29483370-2sh2t\" (UID: \"66b8a832-c205-40a4-9a2f-e70e2f246734\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.394244 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66b8a832-c205-40a4-9a2f-e70e2f246734-secret-volume\") pod \"collect-profiles-29483370-2sh2t\" (UID: \"66b8a832-c205-40a4-9a2f-e70e2f246734\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.394271 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjrnb\" (UniqueName: \"kubernetes.io/projected/66b8a832-c205-40a4-9a2f-e70e2f246734-kube-api-access-bjrnb\") pod 
\"collect-profiles-29483370-2sh2t\" (UID: \"66b8a832-c205-40a4-9a2f-e70e2f246734\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.395470 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66b8a832-c205-40a4-9a2f-e70e2f246734-config-volume\") pod \"collect-profiles-29483370-2sh2t\" (UID: \"66b8a832-c205-40a4-9a2f-e70e2f246734\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.411847 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66b8a832-c205-40a4-9a2f-e70e2f246734-secret-volume\") pod \"collect-profiles-29483370-2sh2t\" (UID: \"66b8a832-c205-40a4-9a2f-e70e2f246734\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.419333 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjrnb\" (UniqueName: \"kubernetes.io/projected/66b8a832-c205-40a4-9a2f-e70e2f246734-kube-api-access-bjrnb\") pod \"collect-profiles-29483370-2sh2t\" (UID: \"66b8a832-c205-40a4-9a2f-e70e2f246734\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.489512 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.969841 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t"] Jan 21 13:30:00 crc kubenswrapper[4881]: W0121 13:30:00.974248 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66b8a832_c205_40a4_9a2f_e70e2f246734.slice/crio-76503e9459d784f9580048265a4a432916dab198fb1de1da70bef9523f127374 WatchSource:0}: Error finding container 76503e9459d784f9580048265a4a432916dab198fb1de1da70bef9523f127374: Status 404 returned error can't find the container with id 76503e9459d784f9580048265a4a432916dab198fb1de1da70bef9523f127374 Jan 21 13:30:01 crc kubenswrapper[4881]: I0121 13:30:01.883320 4881 generic.go:334] "Generic (PLEG): container finished" podID="66b8a832-c205-40a4-9a2f-e70e2f246734" containerID="0ba5d71f6335983529a141b9ebebd16f047678d591839e108c1dd405896d81e3" exitCode=0 Jan 21 13:30:01 crc kubenswrapper[4881]: I0121 13:30:01.883491 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" event={"ID":"66b8a832-c205-40a4-9a2f-e70e2f246734","Type":"ContainerDied","Data":"0ba5d71f6335983529a141b9ebebd16f047678d591839e108c1dd405896d81e3"} Jan 21 13:30:01 crc kubenswrapper[4881]: I0121 13:30:01.883675 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" event={"ID":"66b8a832-c205-40a4-9a2f-e70e2f246734","Type":"ContainerStarted","Data":"76503e9459d784f9580048265a4a432916dab198fb1de1da70bef9523f127374"} Jan 21 13:30:03 crc kubenswrapper[4881]: I0121 13:30:03.307349 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" Jan 21 13:30:03 crc kubenswrapper[4881]: I0121 13:30:03.467637 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjrnb\" (UniqueName: \"kubernetes.io/projected/66b8a832-c205-40a4-9a2f-e70e2f246734-kube-api-access-bjrnb\") pod \"66b8a832-c205-40a4-9a2f-e70e2f246734\" (UID: \"66b8a832-c205-40a4-9a2f-e70e2f246734\") " Jan 21 13:30:03 crc kubenswrapper[4881]: I0121 13:30:03.468234 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66b8a832-c205-40a4-9a2f-e70e2f246734-secret-volume\") pod \"66b8a832-c205-40a4-9a2f-e70e2f246734\" (UID: \"66b8a832-c205-40a4-9a2f-e70e2f246734\") " Jan 21 13:30:03 crc kubenswrapper[4881]: I0121 13:30:03.468611 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66b8a832-c205-40a4-9a2f-e70e2f246734-config-volume\") pod \"66b8a832-c205-40a4-9a2f-e70e2f246734\" (UID: \"66b8a832-c205-40a4-9a2f-e70e2f246734\") " Jan 21 13:30:03 crc kubenswrapper[4881]: I0121 13:30:03.472563 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66b8a832-c205-40a4-9a2f-e70e2f246734-config-volume" (OuterVolumeSpecName: "config-volume") pod "66b8a832-c205-40a4-9a2f-e70e2f246734" (UID: "66b8a832-c205-40a4-9a2f-e70e2f246734"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:30:03 crc kubenswrapper[4881]: I0121 13:30:03.475751 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66b8a832-c205-40a4-9a2f-e70e2f246734-kube-api-access-bjrnb" (OuterVolumeSpecName: "kube-api-access-bjrnb") pod "66b8a832-c205-40a4-9a2f-e70e2f246734" (UID: "66b8a832-c205-40a4-9a2f-e70e2f246734"). InnerVolumeSpecName "kube-api-access-bjrnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:30:03 crc kubenswrapper[4881]: I0121 13:30:03.486612 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66b8a832-c205-40a4-9a2f-e70e2f246734-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "66b8a832-c205-40a4-9a2f-e70e2f246734" (UID: "66b8a832-c205-40a4-9a2f-e70e2f246734"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:30:03 crc kubenswrapper[4881]: I0121 13:30:03.571910 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66b8a832-c205-40a4-9a2f-e70e2f246734-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:03 crc kubenswrapper[4881]: I0121 13:30:03.571964 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjrnb\" (UniqueName: \"kubernetes.io/projected/66b8a832-c205-40a4-9a2f-e70e2f246734-kube-api-access-bjrnb\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:03 crc kubenswrapper[4881]: I0121 13:30:03.571977 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66b8a832-c205-40a4-9a2f-e70e2f246734-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:03 crc kubenswrapper[4881]: I0121 13:30:03.924442 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" event={"ID":"66b8a832-c205-40a4-9a2f-e70e2f246734","Type":"ContainerDied","Data":"76503e9459d784f9580048265a4a432916dab198fb1de1da70bef9523f127374"} Jan 21 13:30:03 crc kubenswrapper[4881]: I0121 13:30:03.924915 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76503e9459d784f9580048265a4a432916dab198fb1de1da70bef9523f127374" Jan 21 13:30:03 crc kubenswrapper[4881]: I0121 13:30:03.925118 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" Jan 21 13:30:04 crc kubenswrapper[4881]: I0121 13:30:04.403174 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8"] Jan 21 13:30:04 crc kubenswrapper[4881]: I0121 13:30:04.411164 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8"] Jan 21 13:30:05 crc kubenswrapper[4881]: I0121 13:30:05.326498 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7" path="/var/lib/kubelet/pods/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7/volumes" Jan 21 13:30:36 crc kubenswrapper[4881]: I0121 13:30:36.646112 4881 scope.go:117] "RemoveContainer" containerID="77513d54cf4d9f5496abf1ce9933fa0d7aa3da0530b4c165a7c1ed70ba94b89c" Jan 21 13:31:48 crc kubenswrapper[4881]: I0121 13:31:48.414192 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ft2l4"] Jan 21 13:31:48 crc kubenswrapper[4881]: E0121 13:31:48.415274 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66b8a832-c205-40a4-9a2f-e70e2f246734" containerName="collect-profiles" Jan 21 13:31:48 crc kubenswrapper[4881]: I0121 13:31:48.415291 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="66b8a832-c205-40a4-9a2f-e70e2f246734" containerName="collect-profiles" Jan 21 13:31:48 crc kubenswrapper[4881]: I0121 13:31:48.415564 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="66b8a832-c205-40a4-9a2f-e70e2f246734" containerName="collect-profiles" Jan 21 13:31:48 crc kubenswrapper[4881]: I0121 13:31:48.421887 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ft2l4" Jan 21 13:31:48 crc kubenswrapper[4881]: I0121 13:31:48.431501 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ft2l4"] Jan 21 13:31:48 crc kubenswrapper[4881]: I0121 13:31:48.549208 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c759a886-be2c-47df-a1d7-1208d82c2f59-utilities\") pod \"redhat-operators-ft2l4\" (UID: \"c759a886-be2c-47df-a1d7-1208d82c2f59\") " pod="openshift-marketplace/redhat-operators-ft2l4" Jan 21 13:31:48 crc kubenswrapper[4881]: I0121 13:31:48.549292 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c759a886-be2c-47df-a1d7-1208d82c2f59-catalog-content\") pod \"redhat-operators-ft2l4\" (UID: \"c759a886-be2c-47df-a1d7-1208d82c2f59\") " pod="openshift-marketplace/redhat-operators-ft2l4" Jan 21 13:31:48 crc kubenswrapper[4881]: I0121 13:31:48.549602 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clkkc\" (UniqueName: \"kubernetes.io/projected/c759a886-be2c-47df-a1d7-1208d82c2f59-kube-api-access-clkkc\") pod \"redhat-operators-ft2l4\" (UID: \"c759a886-be2c-47df-a1d7-1208d82c2f59\") " pod="openshift-marketplace/redhat-operators-ft2l4" Jan 21 13:31:48 crc kubenswrapper[4881]: I0121 13:31:48.652755 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c759a886-be2c-47df-a1d7-1208d82c2f59-utilities\") pod \"redhat-operators-ft2l4\" (UID: \"c759a886-be2c-47df-a1d7-1208d82c2f59\") " pod="openshift-marketplace/redhat-operators-ft2l4" Jan 21 13:31:48 crc kubenswrapper[4881]: I0121 13:31:48.652854 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c759a886-be2c-47df-a1d7-1208d82c2f59-catalog-content\") pod \"redhat-operators-ft2l4\" (UID: \"c759a886-be2c-47df-a1d7-1208d82c2f59\") " pod="openshift-marketplace/redhat-operators-ft2l4" Jan 21 13:31:48 crc kubenswrapper[4881]: I0121 13:31:48.652915 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clkkc\" (UniqueName: \"kubernetes.io/projected/c759a886-be2c-47df-a1d7-1208d82c2f59-kube-api-access-clkkc\") pod \"redhat-operators-ft2l4\" (UID: \"c759a886-be2c-47df-a1d7-1208d82c2f59\") " pod="openshift-marketplace/redhat-operators-ft2l4" Jan 21 13:31:48 crc kubenswrapper[4881]: I0121 13:31:48.653457 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c759a886-be2c-47df-a1d7-1208d82c2f59-utilities\") pod \"redhat-operators-ft2l4\" (UID: \"c759a886-be2c-47df-a1d7-1208d82c2f59\") " pod="openshift-marketplace/redhat-operators-ft2l4" Jan 21 13:31:48 crc kubenswrapper[4881]: I0121 13:31:48.653494 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c759a886-be2c-47df-a1d7-1208d82c2f59-catalog-content\") pod \"redhat-operators-ft2l4\" (UID: \"c759a886-be2c-47df-a1d7-1208d82c2f59\") " pod="openshift-marketplace/redhat-operators-ft2l4" Jan 21 13:31:48 crc kubenswrapper[4881]: I0121 13:31:48.676329 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-clkkc\" (UniqueName: \"kubernetes.io/projected/c759a886-be2c-47df-a1d7-1208d82c2f59-kube-api-access-clkkc\") pod \"redhat-operators-ft2l4\" (UID: \"c759a886-be2c-47df-a1d7-1208d82c2f59\") " pod="openshift-marketplace/redhat-operators-ft2l4" Jan 21 13:31:48 crc kubenswrapper[4881]: I0121 13:31:48.769987 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ft2l4" Jan 21 13:31:49 crc kubenswrapper[4881]: I0121 13:31:49.283894 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ft2l4"] Jan 21 13:31:49 crc kubenswrapper[4881]: I0121 13:31:49.500116 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ft2l4" event={"ID":"c759a886-be2c-47df-a1d7-1208d82c2f59","Type":"ContainerStarted","Data":"2df816c5e6752dbeb71a7b9bbfa33d75710ad8f517cdada1359b9256fa202c34"} Jan 21 13:31:50 crc kubenswrapper[4881]: I0121 13:31:50.512996 4881 generic.go:334] "Generic (PLEG): container finished" podID="c759a886-be2c-47df-a1d7-1208d82c2f59" containerID="268d2958c35060cfcd098ead85774caebc987e2f07b6521892e13e27bbd7542e" exitCode=0 Jan 21 13:31:50 crc kubenswrapper[4881]: I0121 13:31:50.513119 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ft2l4" event={"ID":"c759a886-be2c-47df-a1d7-1208d82c2f59","Type":"ContainerDied","Data":"268d2958c35060cfcd098ead85774caebc987e2f07b6521892e13e27bbd7542e"} Jan 21 13:31:50 crc kubenswrapper[4881]: I0121 13:31:50.515538 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 13:31:52 crc kubenswrapper[4881]: I0121 13:31:52.535589 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ft2l4" event={"ID":"c759a886-be2c-47df-a1d7-1208d82c2f59","Type":"ContainerStarted","Data":"645a629d574e21d4164a272af8a2d18057eaf2429750011101612452f6c847c3"} Jan 21 13:31:56 crc kubenswrapper[4881]: I0121 13:31:56.602534 4881 generic.go:334] "Generic (PLEG): container finished" podID="c759a886-be2c-47df-a1d7-1208d82c2f59" containerID="645a629d574e21d4164a272af8a2d18057eaf2429750011101612452f6c847c3" exitCode=0 Jan 21 13:31:56 crc kubenswrapper[4881]: I0121 13:31:56.602638 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ft2l4" event={"ID":"c759a886-be2c-47df-a1d7-1208d82c2f59","Type":"ContainerDied","Data":"645a629d574e21d4164a272af8a2d18057eaf2429750011101612452f6c847c3"} Jan 21 13:31:57 crc kubenswrapper[4881]: I0121 13:31:57.614299 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ft2l4" event={"ID":"c759a886-be2c-47df-a1d7-1208d82c2f59","Type":"ContainerStarted","Data":"99f9dfbb0e65c7e6f7b6294407d45e0162afa7411415e9ceed83cccdb2a31aa8"} Jan 21 13:31:57 crc kubenswrapper[4881]: I0121 13:31:57.646772 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ft2l4" podStartSLOduration=2.927036289 podStartE2EDuration="9.646720409s" podCreationTimestamp="2026-01-21 13:31:48 +0000 UTC" firstStartedPulling="2026-01-21 13:31:50.515215159 +0000 UTC m=+9297.775171618" lastFinishedPulling="2026-01-21 13:31:57.234899259 +0000 UTC m=+9304.494855738" observedRunningTime="2026-01-21 13:31:57.635122286 +0000 UTC m=+9304.895078765" watchObservedRunningTime="2026-01-21 13:31:57.646720409 +0000 UTC m=+9304.906676878" Jan 21 13:31:58 crc 
kubenswrapper[4881]: I0121 13:31:58.770768 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ft2l4" Jan 21 13:31:58 crc kubenswrapper[4881]: I0121 13:31:58.771187 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ft2l4" Jan 21 13:31:59 crc kubenswrapper[4881]: I0121 13:31:59.828023 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ft2l4" podUID="c759a886-be2c-47df-a1d7-1208d82c2f59" containerName="registry-server" probeResult="failure" output=< Jan 21 13:31:59 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 13:31:59 crc kubenswrapper[4881]: > Jan 21 13:32:00 crc kubenswrapper[4881]: I0121 13:31:59.851475 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:32:00 crc kubenswrapper[4881]: I0121 13:31:59.851574 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:32:08 crc kubenswrapper[4881]: I0121 13:32:08.854106 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ft2l4" Jan 21 13:32:08 crc kubenswrapper[4881]: I0121 13:32:08.917068 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ft2l4" Jan 21 13:32:09 crc kubenswrapper[4881]: I0121 13:32:09.110919 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ft2l4"] Jan 21 13:32:10 crc kubenswrapper[4881]: I0121 13:32:10.766345 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ft2l4" podUID="c759a886-be2c-47df-a1d7-1208d82c2f59" containerName="registry-server" containerID="cri-o://99f9dfbb0e65c7e6f7b6294407d45e0162afa7411415e9ceed83cccdb2a31aa8" gracePeriod=2 Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.377427 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ft2l4"
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.560136 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c759a886-be2c-47df-a1d7-1208d82c2f59-catalog-content\") pod \"c759a886-be2c-47df-a1d7-1208d82c2f59\" (UID: \"c759a886-be2c-47df-a1d7-1208d82c2f59\") "
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.560282 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clkkc\" (UniqueName: \"kubernetes.io/projected/c759a886-be2c-47df-a1d7-1208d82c2f59-kube-api-access-clkkc\") pod \"c759a886-be2c-47df-a1d7-1208d82c2f59\" (UID: \"c759a886-be2c-47df-a1d7-1208d82c2f59\") "
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.560337 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c759a886-be2c-47df-a1d7-1208d82c2f59-utilities\") pod \"c759a886-be2c-47df-a1d7-1208d82c2f59\" (UID: \"c759a886-be2c-47df-a1d7-1208d82c2f59\") "
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.561659 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c759a886-be2c-47df-a1d7-1208d82c2f59-utilities" (OuterVolumeSpecName: "utilities") pod "c759a886-be2c-47df-a1d7-1208d82c2f59" (UID: "c759a886-be2c-47df-a1d7-1208d82c2f59"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.576839 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c759a886-be2c-47df-a1d7-1208d82c2f59-kube-api-access-clkkc" (OuterVolumeSpecName: "kube-api-access-clkkc") pod "c759a886-be2c-47df-a1d7-1208d82c2f59" (UID: "c759a886-be2c-47df-a1d7-1208d82c2f59"). InnerVolumeSpecName "kube-api-access-clkkc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.663419 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clkkc\" (UniqueName: \"kubernetes.io/projected/c759a886-be2c-47df-a1d7-1208d82c2f59-kube-api-access-clkkc\") on node \"crc\" DevicePath \"\""
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.663466 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c759a886-be2c-47df-a1d7-1208d82c2f59-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.714569 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c759a886-be2c-47df-a1d7-1208d82c2f59-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c759a886-be2c-47df-a1d7-1208d82c2f59" (UID: "c759a886-be2c-47df-a1d7-1208d82c2f59"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.765588 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c759a886-be2c-47df-a1d7-1208d82c2f59-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.779963 4881 generic.go:334] "Generic (PLEG): container finished" podID="c759a886-be2c-47df-a1d7-1208d82c2f59" containerID="99f9dfbb0e65c7e6f7b6294407d45e0162afa7411415e9ceed83cccdb2a31aa8" exitCode=0
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.780017 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ft2l4" event={"ID":"c759a886-be2c-47df-a1d7-1208d82c2f59","Type":"ContainerDied","Data":"99f9dfbb0e65c7e6f7b6294407d45e0162afa7411415e9ceed83cccdb2a31aa8"}
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.780059 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ft2l4" event={"ID":"c759a886-be2c-47df-a1d7-1208d82c2f59","Type":"ContainerDied","Data":"2df816c5e6752dbeb71a7b9bbfa33d75710ad8f517cdada1359b9256fa202c34"}
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.780081 4881 scope.go:117] "RemoveContainer" containerID="99f9dfbb0e65c7e6f7b6294407d45e0162afa7411415e9ceed83cccdb2a31aa8"
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.780262 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ft2l4"
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.800325 4881 scope.go:117] "RemoveContainer" containerID="645a629d574e21d4164a272af8a2d18057eaf2429750011101612452f6c847c3"
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.830008 4881 scope.go:117] "RemoveContainer" containerID="268d2958c35060cfcd098ead85774caebc987e2f07b6521892e13e27bbd7542e"
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.833416 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ft2l4"]
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.841610 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ft2l4"]
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.880712 4881 scope.go:117] "RemoveContainer" containerID="99f9dfbb0e65c7e6f7b6294407d45e0162afa7411415e9ceed83cccdb2a31aa8"
Jan 21 13:32:11 crc kubenswrapper[4881]: E0121 13:32:11.883388 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99f9dfbb0e65c7e6f7b6294407d45e0162afa7411415e9ceed83cccdb2a31aa8\": container with ID starting with 99f9dfbb0e65c7e6f7b6294407d45e0162afa7411415e9ceed83cccdb2a31aa8 not found: ID does not exist" containerID="99f9dfbb0e65c7e6f7b6294407d45e0162afa7411415e9ceed83cccdb2a31aa8"
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.883437 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99f9dfbb0e65c7e6f7b6294407d45e0162afa7411415e9ceed83cccdb2a31aa8"} err="failed to get container status \"99f9dfbb0e65c7e6f7b6294407d45e0162afa7411415e9ceed83cccdb2a31aa8\": rpc error: code = NotFound desc = could not find container \"99f9dfbb0e65c7e6f7b6294407d45e0162afa7411415e9ceed83cccdb2a31aa8\": container with ID starting with 99f9dfbb0e65c7e6f7b6294407d45e0162afa7411415e9ceed83cccdb2a31aa8 not found: ID does not exist"
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.883469 4881 scope.go:117] "RemoveContainer" containerID="645a629d574e21d4164a272af8a2d18057eaf2429750011101612452f6c847c3"
Jan 21 13:32:11 crc kubenswrapper[4881]: E0121 13:32:11.883850 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"645a629d574e21d4164a272af8a2d18057eaf2429750011101612452f6c847c3\": container with ID starting with 645a629d574e21d4164a272af8a2d18057eaf2429750011101612452f6c847c3 not found: ID does not exist" containerID="645a629d574e21d4164a272af8a2d18057eaf2429750011101612452f6c847c3"
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.883872 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"645a629d574e21d4164a272af8a2d18057eaf2429750011101612452f6c847c3"} err="failed to get container status \"645a629d574e21d4164a272af8a2d18057eaf2429750011101612452f6c847c3\": rpc error: code = NotFound desc = could not find container \"645a629d574e21d4164a272af8a2d18057eaf2429750011101612452f6c847c3\": container with ID starting with 645a629d574e21d4164a272af8a2d18057eaf2429750011101612452f6c847c3 not found: ID does not exist"
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.883886 4881 scope.go:117] "RemoveContainer" containerID="268d2958c35060cfcd098ead85774caebc987e2f07b6521892e13e27bbd7542e"
Jan 21 13:32:11 crc kubenswrapper[4881]: E0121 13:32:11.884114 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"268d2958c35060cfcd098ead85774caebc987e2f07b6521892e13e27bbd7542e\": container with ID starting with 268d2958c35060cfcd098ead85774caebc987e2f07b6521892e13e27bbd7542e not found: ID does not exist" containerID="268d2958c35060cfcd098ead85774caebc987e2f07b6521892e13e27bbd7542e"
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.884139 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"268d2958c35060cfcd098ead85774caebc987e2f07b6521892e13e27bbd7542e"} err="failed to get container status \"268d2958c35060cfcd098ead85774caebc987e2f07b6521892e13e27bbd7542e\": rpc error: code = NotFound desc = could not find container \"268d2958c35060cfcd098ead85774caebc987e2f07b6521892e13e27bbd7542e\": container with ID starting with 268d2958c35060cfcd098ead85774caebc987e2f07b6521892e13e27bbd7542e not found: ID does not exist"
Jan 21 13:32:13 crc kubenswrapper[4881]: I0121 13:32:13.335814 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c759a886-be2c-47df-a1d7-1208d82c2f59" path="/var/lib/kubelet/pods/c759a886-be2c-47df-a1d7-1208d82c2f59/volumes"
Jan 21 13:32:29 crc kubenswrapper[4881]: I0121 13:32:29.850887 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 13:32:29 crc kubenswrapper[4881]: I0121 13:32:29.851481 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 13:32:59 crc kubenswrapper[4881]: I0121 13:32:59.850885 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 13:32:59 crc kubenswrapper[4881]: I0121 13:32:59.851346 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 13:32:59 crc kubenswrapper[4881]: I0121 13:32:59.851404 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr"
Jan 21 13:32:59 crc kubenswrapper[4881]: I0121 13:32:59.852449 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9e57748be28be159b55c45e3fa90ee30718fb2ed9c755f793bb76672c2c13826"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 13:32:59 crc kubenswrapper[4881]: I0121 13:32:59.852514 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://9e57748be28be159b55c45e3fa90ee30718fb2ed9c755f793bb76672c2c13826" gracePeriod=600
Jan 21 13:33:00 crc kubenswrapper[4881]: I0121 13:33:00.468943 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="9e57748be28be159b55c45e3fa90ee30718fb2ed9c755f793bb76672c2c13826" exitCode=0
Jan 21 13:33:00 crc kubenswrapper[4881]: I0121 13:33:00.469036 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"9e57748be28be159b55c45e3fa90ee30718fb2ed9c755f793bb76672c2c13826"}
Jan 21 13:33:00 crc kubenswrapper[4881]: I0121 13:33:00.469318 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4"
Jan 21 13:33:01 crc kubenswrapper[4881]: I0121 13:33:01.483874 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85"}
Jan 21 13:35:29 crc kubenswrapper[4881]: I0121 13:35:29.851316 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 13:35:29 crc kubenswrapper[4881]: I0121 13:35:29.851953 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 13:35:59 crc kubenswrapper[4881]: I0121 13:35:59.851526 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 13:35:59 crc kubenswrapper[4881]: I0121 13:35:59.852179 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 13:36:29 crc kubenswrapper[4881]: I0121 13:36:29.851409 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 13:36:29 crc kubenswrapper[4881]: I0121 13:36:29.852023 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 13:36:29 crc kubenswrapper[4881]: I0121 13:36:29.852076 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr"
Jan 21 13:36:29 crc kubenswrapper[4881]: I0121 13:36:29.852985 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 13:36:29 crc kubenswrapper[4881]: I0121 13:36:29.853041 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" gracePeriod=600
Jan 21 13:36:29 crc kubenswrapper[4881]: E0121 13:36:29.990286 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:36:30 crc kubenswrapper[4881]: I0121 13:36:30.192299 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" exitCode=0
Jan 21 13:36:30 crc kubenswrapper[4881]: I0121 13:36:30.192357 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85"}
Jan 21 13:36:30 crc kubenswrapper[4881]: I0121 13:36:30.192399 4881 scope.go:117] "RemoveContainer" containerID="9e57748be28be159b55c45e3fa90ee30718fb2ed9c755f793bb76672c2c13826"
Jan 21 13:36:30 crc kubenswrapper[4881]: I0121 13:36:30.193746 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85"
Jan 21 13:36:30 crc kubenswrapper[4881]: E0121 13:36:30.194611 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:36:44 crc kubenswrapper[4881]: I0121 13:36:44.311095 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85"
Jan 21 13:36:44 crc kubenswrapper[4881]: E0121 13:36:44.311953 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:36:57 crc kubenswrapper[4881]: I0121 13:36:57.311501 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85"
Jan 21 13:36:57 crc kubenswrapper[4881]: E0121 13:36:57.312495 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:37:12 crc kubenswrapper[4881]: I0121 13:37:12.311557 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85"
Jan 21 13:37:12 crc kubenswrapper[4881]: E0121 13:37:12.312907 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:37:26 crc kubenswrapper[4881]: I0121 13:37:26.310849 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85"
Jan 21 13:37:26 crc kubenswrapper[4881]: E0121 13:37:26.311875 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:37:39 crc kubenswrapper[4881]: I0121 13:37:39.311234 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85"
Jan 21 13:37:39 crc kubenswrapper[4881]: E0121 13:37:39.312268 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:37:53 crc kubenswrapper[4881]: I0121 13:37:53.323739 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85"
Jan 21 13:37:53 crc kubenswrapper[4881]: E0121 13:37:53.324545 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.018050 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s88gg"]
Jan 21 13:37:57 crc kubenswrapper[4881]: E0121 13:37:57.019131 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c759a886-be2c-47df-a1d7-1208d82c2f59" containerName="extract-content"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.019148 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c759a886-be2c-47df-a1d7-1208d82c2f59" containerName="extract-content"
Jan 21 13:37:57 crc kubenswrapper[4881]: E0121 13:37:57.019193 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c759a886-be2c-47df-a1d7-1208d82c2f59" containerName="extract-utilities"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.019202 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c759a886-be2c-47df-a1d7-1208d82c2f59" containerName="extract-utilities"
Jan 21 13:37:57 crc kubenswrapper[4881]: E0121 13:37:57.019223 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c759a886-be2c-47df-a1d7-1208d82c2f59" containerName="registry-server"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.019230 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c759a886-be2c-47df-a1d7-1208d82c2f59" containerName="registry-server"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.019527 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c759a886-be2c-47df-a1d7-1208d82c2f59" containerName="registry-server"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.021561 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s88gg"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.027927 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s88gg"]
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.034211 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed14e1b3-9440-4f92-a793-683eb01e4401-catalog-content\") pod \"redhat-marketplace-s88gg\" (UID: \"ed14e1b3-9440-4f92-a793-683eb01e4401\") " pod="openshift-marketplace/redhat-marketplace-s88gg"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.034395 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed14e1b3-9440-4f92-a793-683eb01e4401-utilities\") pod \"redhat-marketplace-s88gg\" (UID: \"ed14e1b3-9440-4f92-a793-683eb01e4401\") " pod="openshift-marketplace/redhat-marketplace-s88gg"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.034542 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b4dz\" (UniqueName: \"kubernetes.io/projected/ed14e1b3-9440-4f92-a793-683eb01e4401-kube-api-access-7b4dz\") pod \"redhat-marketplace-s88gg\" (UID: \"ed14e1b3-9440-4f92-a793-683eb01e4401\") " pod="openshift-marketplace/redhat-marketplace-s88gg"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.135636 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b4dz\" (UniqueName: \"kubernetes.io/projected/ed14e1b3-9440-4f92-a793-683eb01e4401-kube-api-access-7b4dz\") pod \"redhat-marketplace-s88gg\" (UID: \"ed14e1b3-9440-4f92-a793-683eb01e4401\") " pod="openshift-marketplace/redhat-marketplace-s88gg"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.135721 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed14e1b3-9440-4f92-a793-683eb01e4401-catalog-content\") pod \"redhat-marketplace-s88gg\" (UID: \"ed14e1b3-9440-4f92-a793-683eb01e4401\") " pod="openshift-marketplace/redhat-marketplace-s88gg"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.135860 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed14e1b3-9440-4f92-a793-683eb01e4401-utilities\") pod \"redhat-marketplace-s88gg\" (UID: \"ed14e1b3-9440-4f92-a793-683eb01e4401\") " pod="openshift-marketplace/redhat-marketplace-s88gg"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.136406 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed14e1b3-9440-4f92-a793-683eb01e4401-catalog-content\") pod \"redhat-marketplace-s88gg\" (UID: \"ed14e1b3-9440-4f92-a793-683eb01e4401\") " pod="openshift-marketplace/redhat-marketplace-s88gg"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.136436 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed14e1b3-9440-4f92-a793-683eb01e4401-utilities\") pod \"redhat-marketplace-s88gg\" (UID: \"ed14e1b3-9440-4f92-a793-683eb01e4401\") " pod="openshift-marketplace/redhat-marketplace-s88gg"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.163040 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b4dz\" (UniqueName: \"kubernetes.io/projected/ed14e1b3-9440-4f92-a793-683eb01e4401-kube-api-access-7b4dz\") pod \"redhat-marketplace-s88gg\" (UID: \"ed14e1b3-9440-4f92-a793-683eb01e4401\") " pod="openshift-marketplace/redhat-marketplace-s88gg"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.209734 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lr68z"]
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.211971 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lr68z"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.220726 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lr68z"]
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.238983 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03907694-a0e6-40d6-8142-9f20169ffe16-utilities\") pod \"certified-operators-lr68z\" (UID: \"03907694-a0e6-40d6-8142-9f20169ffe16\") " pod="openshift-marketplace/certified-operators-lr68z"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.239404 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcnx5\" (UniqueName: \"kubernetes.io/projected/03907694-a0e6-40d6-8142-9f20169ffe16-kube-api-access-pcnx5\") pod \"certified-operators-lr68z\" (UID: \"03907694-a0e6-40d6-8142-9f20169ffe16\") " pod="openshift-marketplace/certified-operators-lr68z"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.239447 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03907694-a0e6-40d6-8142-9f20169ffe16-catalog-content\") pod \"certified-operators-lr68z\" (UID: \"03907694-a0e6-40d6-8142-9f20169ffe16\") " pod="openshift-marketplace/certified-operators-lr68z"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.340745 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcnx5\" (UniqueName: \"kubernetes.io/projected/03907694-a0e6-40d6-8142-9f20169ffe16-kube-api-access-pcnx5\") pod \"certified-operators-lr68z\" (UID: \"03907694-a0e6-40d6-8142-9f20169ffe16\") " pod="openshift-marketplace/certified-operators-lr68z"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.340810 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03907694-a0e6-40d6-8142-9f20169ffe16-catalog-content\") pod \"certified-operators-lr68z\" (UID: \"03907694-a0e6-40d6-8142-9f20169ffe16\") " pod="openshift-marketplace/certified-operators-lr68z"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.341372 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03907694-a0e6-40d6-8142-9f20169ffe16-catalog-content\") pod \"certified-operators-lr68z\" (UID: \"03907694-a0e6-40d6-8142-9f20169ffe16\") " pod="openshift-marketplace/certified-operators-lr68z"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.340903 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03907694-a0e6-40d6-8142-9f20169ffe16-utilities\") pod \"certified-operators-lr68z\" (UID: \"03907694-a0e6-40d6-8142-9f20169ffe16\") " pod="openshift-marketplace/certified-operators-lr68z"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.341933 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s88gg"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.343073 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03907694-a0e6-40d6-8142-9f20169ffe16-utilities\") pod \"certified-operators-lr68z\" (UID: \"03907694-a0e6-40d6-8142-9f20169ffe16\") " pod="openshift-marketplace/certified-operators-lr68z"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.357757 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcnx5\" (UniqueName: \"kubernetes.io/projected/03907694-a0e6-40d6-8142-9f20169ffe16-kube-api-access-pcnx5\") pod \"certified-operators-lr68z\" (UID: \"03907694-a0e6-40d6-8142-9f20169ffe16\") " pod="openshift-marketplace/certified-operators-lr68z"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.556115 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lr68z"
Jan 21 13:37:58 crc kubenswrapper[4881]: I0121 13:37:57.954847 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s88gg"]
Jan 21 13:37:58 crc kubenswrapper[4881]: W0121 13:37:57.980595 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded14e1b3_9440_4f92_a793_683eb01e4401.slice/crio-3128a5db6427ef859cbb254cfb64ea3e3fe6d1a8d86c2c240331ac40ce10660b WatchSource:0}: Error finding container 3128a5db6427ef859cbb254cfb64ea3e3fe6d1a8d86c2c240331ac40ce10660b: Status 404 returned error can't find the container with id 3128a5db6427ef859cbb254cfb64ea3e3fe6d1a8d86c2c240331ac40ce10660b
Jan 21 13:37:58 crc kubenswrapper[4881]: I0121 13:37:58.399856 4881 generic.go:334] "Generic (PLEG): container finished" podID="ed14e1b3-9440-4f92-a793-683eb01e4401" containerID="a4abcb1bb7fc4e7fbb86a1d2bb48c302a0121a5ee39ee48b7817d62657b97100" exitCode=0
Jan 21 13:37:58 crc kubenswrapper[4881]: I0121 13:37:58.400145 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s88gg" event={"ID":"ed14e1b3-9440-4f92-a793-683eb01e4401","Type":"ContainerDied","Data":"a4abcb1bb7fc4e7fbb86a1d2bb48c302a0121a5ee39ee48b7817d62657b97100"}
Jan 21 13:37:58 crc kubenswrapper[4881]: I0121 13:37:58.400172 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s88gg" event={"ID":"ed14e1b3-9440-4f92-a793-683eb01e4401","Type":"ContainerStarted","Data":"3128a5db6427ef859cbb254cfb64ea3e3fe6d1a8d86c2c240331ac40ce10660b"}
Jan 21 13:37:58 crc kubenswrapper[4881]: I0121 13:37:58.403460 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 13:37:58 crc kubenswrapper[4881]: I0121 13:37:58.941311 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lr68z"]
Jan 21 13:37:59 crc kubenswrapper[4881]: I0121 13:37:59.412691 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr68z" event={"ID":"03907694-a0e6-40d6-8142-9f20169ffe16","Type":"ContainerStarted","Data":"17ba08f13a57d780ffed935060230f2347bf7739e7e17de9f0c3d10f0e502757"}
Jan 21 13:37:59 crc kubenswrapper[4881]: I0121 13:37:59.616477 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zw7gl"]
Jan 21 13:37:59 crc kubenswrapper[4881]: I0121 13:37:59.619466 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zw7gl"
Jan 21 13:37:59 crc kubenswrapper[4881]: I0121 13:37:59.679674 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zw7gl"]
Jan 21 13:37:59 crc kubenswrapper[4881]: I0121 13:37:59.702042 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf33fd22-6287-45a0-a95d-52c731fdda8d-catalog-content\") pod \"community-operators-zw7gl\" (UID: \"bf33fd22-6287-45a0-a95d-52c731fdda8d\") " pod="openshift-marketplace/community-operators-zw7gl"
Jan 21 13:37:59 crc kubenswrapper[4881]: I0121 13:37:59.702101 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf33fd22-6287-45a0-a95d-52c731fdda8d-utilities\") pod \"community-operators-zw7gl\" (UID: \"bf33fd22-6287-45a0-a95d-52c731fdda8d\") " pod="openshift-marketplace/community-operators-zw7gl"
Jan 21 13:37:59 crc kubenswrapper[4881]: I0121 13:37:59.702251 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rrct\" (UniqueName: \"kubernetes.io/projected/bf33fd22-6287-45a0-a95d-52c731fdda8d-kube-api-access-8rrct\") pod \"community-operators-zw7gl\" (UID: \"bf33fd22-6287-45a0-a95d-52c731fdda8d\") " pod="openshift-marketplace/community-operators-zw7gl"
Jan 21 13:37:59 crc kubenswrapper[4881]: I0121 13:37:59.804251 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf33fd22-6287-45a0-a95d-52c731fdda8d-catalog-content\") pod \"community-operators-zw7gl\" (UID: \"bf33fd22-6287-45a0-a95d-52c731fdda8d\") " pod="openshift-marketplace/community-operators-zw7gl"
Jan 21 13:37:59 crc kubenswrapper[4881]: I0121 13:37:59.804609 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf33fd22-6287-45a0-a95d-52c731fdda8d-utilities\") pod \"community-operators-zw7gl\" (UID: \"bf33fd22-6287-45a0-a95d-52c731fdda8d\") " pod="openshift-marketplace/community-operators-zw7gl"
Jan 21 13:37:59 crc kubenswrapper[4881]: I0121 13:37:59.804754 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rrct\" (UniqueName: \"kubernetes.io/projected/bf33fd22-6287-45a0-a95d-52c731fdda8d-kube-api-access-8rrct\") pod \"community-operators-zw7gl\" (UID: \"bf33fd22-6287-45a0-a95d-52c731fdda8d\") " pod="openshift-marketplace/community-operators-zw7gl"
Jan 21 13:37:59 crc kubenswrapper[4881]: I0121 13:37:59.805335 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf33fd22-6287-45a0-a95d-52c731fdda8d-utilities\") pod \"community-operators-zw7gl\" (UID: \"bf33fd22-6287-45a0-a95d-52c731fdda8d\") " pod="openshift-marketplace/community-operators-zw7gl"
Jan 21 13:37:59 crc kubenswrapper[4881]: I0121 13:37:59.805390 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf33fd22-6287-45a0-a95d-52c731fdda8d-catalog-content\") pod \"community-operators-zw7gl\" (UID: \"bf33fd22-6287-45a0-a95d-52c731fdda8d\") " pod="openshift-marketplace/community-operators-zw7gl"
Jan 21 13:37:59 crc kubenswrapper[4881]: I0121 13:37:59.830655 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rrct\" (UniqueName: \"kubernetes.io/projected/bf33fd22-6287-45a0-a95d-52c731fdda8d-kube-api-access-8rrct\") pod \"community-operators-zw7gl\" (UID: \"bf33fd22-6287-45a0-a95d-52c731fdda8d\") " pod="openshift-marketplace/community-operators-zw7gl"
Jan 21 13:38:00 crc kubenswrapper[4881]: I0121 13:38:00.019801 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zw7gl"
Jan 21 13:38:00 crc kubenswrapper[4881]: I0121 13:38:00.440231 4881 generic.go:334] "Generic (PLEG): container finished" podID="03907694-a0e6-40d6-8142-9f20169ffe16" containerID="2eb3085ce73aca7857a7b8d8990101a886d19523c869ba8ce6f26a66f122249d" exitCode=0
Jan 21 13:38:00 crc kubenswrapper[4881]: I0121 13:38:00.440708 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr68z" event={"ID":"03907694-a0e6-40d6-8142-9f20169ffe16","Type":"ContainerDied","Data":"2eb3085ce73aca7857a7b8d8990101a886d19523c869ba8ce6f26a66f122249d"}
Jan 21 13:38:00 crc kubenswrapper[4881]: I0121 13:38:00.456373 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s88gg" event={"ID":"ed14e1b3-9440-4f92-a793-683eb01e4401","Type":"ContainerStarted","Data":"84d46420b749da45b6e3d003b8e0f9f987868b2f6b740919fb1f4b6f4381ec22"}
Jan 21 13:38:00 crc kubenswrapper[4881]: I0121 13:38:00.864174 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zw7gl"]
Jan 21 13:38:00 crc kubenswrapper[4881]: W0121 13:38:00.866150 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf33fd22_6287_45a0_a95d_52c731fdda8d.slice/crio-7d972578e89879daf6c160e9a56e6d8e189f16edd9dc6acd027b280469b2b64a WatchSource:0}: Error finding container 7d972578e89879daf6c160e9a56e6d8e189f16edd9dc6acd027b280469b2b64a: Status 404 returned error can't find the container with id 7d972578e89879daf6c160e9a56e6d8e189f16edd9dc6acd027b280469b2b64a
Jan 21 13:38:01 crc kubenswrapper[4881]: I0121 13:38:01.479154 4881 generic.go:334] "Generic (PLEG): container finished" podID="bf33fd22-6287-45a0-a95d-52c731fdda8d" containerID="c8b5a836281ab5b467d91cb111b8bde5e2a3b2341cf2889f854337a51110a7f2" exitCode=0
Jan 21 13:38:01 crc kubenswrapper[4881]: I0121 13:38:01.479494 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw7gl" event={"ID":"bf33fd22-6287-45a0-a95d-52c731fdda8d","Type":"ContainerDied","Data":"c8b5a836281ab5b467d91cb111b8bde5e2a3b2341cf2889f854337a51110a7f2"}
Jan 21 13:38:01 crc kubenswrapper[4881]: I0121 13:38:01.479988 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw7gl" event={"ID":"bf33fd22-6287-45a0-a95d-52c731fdda8d","Type":"ContainerStarted","Data":"7d972578e89879daf6c160e9a56e6d8e189f16edd9dc6acd027b280469b2b64a"}
Jan 21 13:38:01 crc kubenswrapper[4881]: I0121 13:38:01.486456 4881 generic.go:334] "Generic (PLEG): container finished" podID="ed14e1b3-9440-4f92-a793-683eb01e4401" containerID="84d46420b749da45b6e3d003b8e0f9f987868b2f6b740919fb1f4b6f4381ec22" exitCode=0
Jan 21 13:38:01 crc kubenswrapper[4881]: I0121 13:38:01.486508 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s88gg" event={"ID":"ed14e1b3-9440-4f92-a793-683eb01e4401","Type":"ContainerDied","Data":"84d46420b749da45b6e3d003b8e0f9f987868b2f6b740919fb1f4b6f4381ec22"}
Jan 21 13:38:02 crc kubenswrapper[4881]: I0121 13:38:02.502799 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw7gl" event={"ID":"bf33fd22-6287-45a0-a95d-52c731fdda8d","Type":"ContainerStarted","Data":"1464e8dac96b23af6bad563afba50c099ee6ffdb3c7eb1c93e0ab2b66618e523"}
Jan 21 13:38:02 crc kubenswrapper[4881]: I0121 13:38:02.511391 4881 generic.go:334] "Generic (PLEG): container finished" podID="03907694-a0e6-40d6-8142-9f20169ffe16" containerID="b7713d44fbace2dbc23c6335fe5f1e40542531f096c7a2e71ced23cf196b9cb8" exitCode=0
Jan 21 13:38:02 crc kubenswrapper[4881]: I0121 13:38:02.511492 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr68z" event={"ID":"03907694-a0e6-40d6-8142-9f20169ffe16","Type":"ContainerDied","Data":"b7713d44fbace2dbc23c6335fe5f1e40542531f096c7a2e71ced23cf196b9cb8"}
Jan 21 13:38:02 crc kubenswrapper[4881]: I0121 13:38:02.518021 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s88gg" event={"ID":"ed14e1b3-9440-4f92-a793-683eb01e4401","Type":"ContainerStarted","Data":"68684b89b8dc294a8040b9a52afa09badc56e4921abf15ec54176d3e4b23f734"}
Jan 21 13:38:02 crc kubenswrapper[4881]: I0121 13:38:02.561255 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s88gg" podStartSLOduration=3.05382083 podStartE2EDuration="6.561222184s" podCreationTimestamp="2026-01-21 13:37:56 +0000 UTC" firstStartedPulling="2026-01-21 13:37:58.403104017 +0000 UTC m=+9665.663060486" lastFinishedPulling="2026-01-21 13:38:01.910505361 +0000 UTC m=+9669.170461840" observedRunningTime="2026-01-21 13:38:02.551699132 +0000 UTC m=+9669.811655601" watchObservedRunningTime="2026-01-21 13:38:02.561222184 +0000 UTC m=+9669.821178653"
Jan 21 13:38:05 crc kubenswrapper[4881]: I0121 13:38:05.761252 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr68z" event={"ID":"03907694-a0e6-40d6-8142-9f20169ffe16","Type":"ContainerStarted","Data":"fa5a4de2f4e98ac0f222c55956633702fd594d9072f2c4646b5748469e7268b6"}
Jan 21 13:38:05 crc kubenswrapper[4881]: I0121 13:38:05.766657 4881 generic.go:334] "Generic (PLEG): container finished" podID="bf33fd22-6287-45a0-a95d-52c731fdda8d" containerID="1464e8dac96b23af6bad563afba50c099ee6ffdb3c7eb1c93e0ab2b66618e523" exitCode=0
Jan 21 13:38:05 crc kubenswrapper[4881]: I0121 13:38:05.766715 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw7gl" event={"ID":"bf33fd22-6287-45a0-a95d-52c731fdda8d","Type":"ContainerDied","Data":"1464e8dac96b23af6bad563afba50c099ee6ffdb3c7eb1c93e0ab2b66618e523"}
Jan 21 13:38:05 crc kubenswrapper[4881]: I0121 13:38:05.789555 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lr68z" podStartSLOduration=5.393748249 podStartE2EDuration="8.789532068s" podCreationTimestamp="2026-01-21 13:37:57 +0000 UTC" firstStartedPulling="2026-01-21 13:38:00.448530092 +0000 UTC m=+9667.708486561" lastFinishedPulling="2026-01-21 13:38:03.844313911 +0000 UTC m=+9671.104270380" observedRunningTime="2026-01-21 13:38:05.785233463 +0000 UTC m=+9673.045189962" watchObservedRunningTime="2026-01-21 13:38:05.789532068 +0000 UTC m=+9673.049488537"
Jan 21 13:38:06 crc kubenswrapper[4881]: I0121 13:38:06.798544 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw7gl" event={"ID":"bf33fd22-6287-45a0-a95d-52c731fdda8d","Type":"ContainerStarted","Data":"005380c69dc02bb03b813c5b9b36612ee450bef0fa7fc34d08e62eb7b603f7e6"}
Jan 21 13:38:06 crc kubenswrapper[4881]: I0121 13:38:06.820451 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zw7gl" podStartSLOduration=2.8995577949999998 podStartE2EDuration="7.82042906s" podCreationTimestamp="2026-01-21 13:37:59 +0000 UTC" firstStartedPulling="2026-01-21 13:38:01.482289193 +0000 UTC m=+9668.742245672" lastFinishedPulling="2026-01-21 13:38:06.403160468 +0000 UTC m=+9673.663116937" observedRunningTime="2026-01-21 13:38:06.81718012 +0000 UTC m=+9674.077136599" watchObservedRunningTime="2026-01-21 13:38:06.82042906 +0000 UTC m=+9674.080385529"
Jan 21 13:38:07 crc kubenswrapper[4881]: I0121 13:38:07.311131 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85"
Jan 21 13:38:07 crc kubenswrapper[4881]: E0121 13:38:07.311679 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:38:07 crc kubenswrapper[4881]: I0121 13:38:07.342907 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s88gg"
Jan 21 13:38:07 crc kubenswrapper[4881]: I0121 13:38:07.342965 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s88gg"
Jan 21 13:38:07 crc kubenswrapper[4881]: I0121 13:38:07.398148 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s88gg"
Jan 21 13:38:07 crc kubenswrapper[4881]: I0121 13:38:07.557123 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lr68z"
Jan 21 13:38:07 crc kubenswrapper[4881]: I0121 13:38:07.557431 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lr68z"
Jan 21 13:38:07 crc kubenswrapper[4881]: I0121 13:38:07.642477 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lr68z"
Jan 21 13:38:07 crc kubenswrapper[4881]: I0121 13:38:07.853727 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s88gg"
Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.020855 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zw7gl"
Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.021280 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zw7gl"
Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.069109 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zw7gl"
Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.193308 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s88gg"]
Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.193583 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s88gg" podUID="ed14e1b3-9440-4f92-a793-683eb01e4401" containerName="registry-server" containerID="cri-o://68684b89b8dc294a8040b9a52afa09badc56e4921abf15ec54176d3e4b23f734" gracePeriod=2
Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.706697 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s88gg"
Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.750373 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed14e1b3-9440-4f92-a793-683eb01e4401-utilities\") pod \"ed14e1b3-9440-4f92-a793-683eb01e4401\" (UID: \"ed14e1b3-9440-4f92-a793-683eb01e4401\") "
Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.750466 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed14e1b3-9440-4f92-a793-683eb01e4401-catalog-content\") pod \"ed14e1b3-9440-4f92-a793-683eb01e4401\" (UID: \"ed14e1b3-9440-4f92-a793-683eb01e4401\") "
Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.750686 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b4dz\" (UniqueName: \"kubernetes.io/projected/ed14e1b3-9440-4f92-a793-683eb01e4401-kube-api-access-7b4dz\") pod \"ed14e1b3-9440-4f92-a793-683eb01e4401\" (UID: \"ed14e1b3-9440-4f92-a793-683eb01e4401\") "
Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.751413 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed14e1b3-9440-4f92-a793-683eb01e4401-utilities" (OuterVolumeSpecName: "utilities") pod "ed14e1b3-9440-4f92-a793-683eb01e4401" (UID: "ed14e1b3-9440-4f92-a793-683eb01e4401"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.757155 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed14e1b3-9440-4f92-a793-683eb01e4401-kube-api-access-7b4dz" (OuterVolumeSpecName: "kube-api-access-7b4dz") pod "ed14e1b3-9440-4f92-a793-683eb01e4401" (UID: "ed14e1b3-9440-4f92-a793-683eb01e4401"). InnerVolumeSpecName "kube-api-access-7b4dz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.791836 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed14e1b3-9440-4f92-a793-683eb01e4401-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed14e1b3-9440-4f92-a793-683eb01e4401" (UID: "ed14e1b3-9440-4f92-a793-683eb01e4401"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.853132 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b4dz\" (UniqueName: \"kubernetes.io/projected/ed14e1b3-9440-4f92-a793-683eb01e4401-kube-api-access-7b4dz\") on node \"crc\" DevicePath \"\""
Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.853169 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed14e1b3-9440-4f92-a793-683eb01e4401-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.853179 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed14e1b3-9440-4f92-a793-683eb01e4401-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.956381 4881 generic.go:334] "Generic (PLEG): container finished" podID="ed14e1b3-9440-4f92-a793-683eb01e4401" containerID="68684b89b8dc294a8040b9a52afa09badc56e4921abf15ec54176d3e4b23f734" exitCode=0
Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.956445 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s88gg"
Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.956554 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s88gg" event={"ID":"ed14e1b3-9440-4f92-a793-683eb01e4401","Type":"ContainerDied","Data":"68684b89b8dc294a8040b9a52afa09badc56e4921abf15ec54176d3e4b23f734"}
Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.956604 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s88gg" event={"ID":"ed14e1b3-9440-4f92-a793-683eb01e4401","Type":"ContainerDied","Data":"3128a5db6427ef859cbb254cfb64ea3e3fe6d1a8d86c2c240331ac40ce10660b"}
Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.956624 4881 scope.go:117] "RemoveContainer" containerID="68684b89b8dc294a8040b9a52afa09badc56e4921abf15ec54176d3e4b23f734"
Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.980348 4881 scope.go:117] "RemoveContainer" containerID="84d46420b749da45b6e3d003b8e0f9f987868b2f6b740919fb1f4b6f4381ec22"
Jan 21 13:38:11 crc kubenswrapper[4881]: I0121 13:38:11.001292 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s88gg"]
Jan 21 13:38:11 crc kubenswrapper[4881]: I0121 13:38:11.017011 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s88gg"]
Jan 21 13:38:11 crc kubenswrapper[4881]: I0121 13:38:11.022650 4881 scope.go:117] "RemoveContainer" containerID="a4abcb1bb7fc4e7fbb86a1d2bb48c302a0121a5ee39ee48b7817d62657b97100"
Jan 21 13:38:11 crc kubenswrapper[4881]: I0121 13:38:11.070023 4881 scope.go:117] "RemoveContainer" containerID="68684b89b8dc294a8040b9a52afa09badc56e4921abf15ec54176d3e4b23f734"
Jan 21 13:38:11 crc kubenswrapper[4881]: E0121 13:38:11.070568 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68684b89b8dc294a8040b9a52afa09badc56e4921abf15ec54176d3e4b23f734\": container with ID starting with 68684b89b8dc294a8040b9a52afa09badc56e4921abf15ec54176d3e4b23f734 not found: ID does not exist" containerID="68684b89b8dc294a8040b9a52afa09badc56e4921abf15ec54176d3e4b23f734"
Jan 21 13:38:11 crc kubenswrapper[4881]: I0121 13:38:11.070631 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68684b89b8dc294a8040b9a52afa09badc56e4921abf15ec54176d3e4b23f734"} err="failed to get container status \"68684b89b8dc294a8040b9a52afa09badc56e4921abf15ec54176d3e4b23f734\": rpc error: code = NotFound desc = could not find container \"68684b89b8dc294a8040b9a52afa09badc56e4921abf15ec54176d3e4b23f734\": container with ID starting with 68684b89b8dc294a8040b9a52afa09badc56e4921abf15ec54176d3e4b23f734 not found: ID does not exist"
Jan 21 13:38:11 crc kubenswrapper[4881]: I0121 13:38:11.070667 4881 scope.go:117] "RemoveContainer" containerID="84d46420b749da45b6e3d003b8e0f9f987868b2f6b740919fb1f4b6f4381ec22"
Jan 21 13:38:11 crc kubenswrapper[4881]: E0121 13:38:11.071176 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84d46420b749da45b6e3d003b8e0f9f987868b2f6b740919fb1f4b6f4381ec22\": container with ID starting with 84d46420b749da45b6e3d003b8e0f9f987868b2f6b740919fb1f4b6f4381ec22 not found: ID does not exist" containerID="84d46420b749da45b6e3d003b8e0f9f987868b2f6b740919fb1f4b6f4381ec22"
Jan 21 13:38:11 crc kubenswrapper[4881]: I0121 13:38:11.071207 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84d46420b749da45b6e3d003b8e0f9f987868b2f6b740919fb1f4b6f4381ec22"} err="failed to get container status \"84d46420b749da45b6e3d003b8e0f9f987868b2f6b740919fb1f4b6f4381ec22\": rpc error: code = NotFound desc = could not find container \"84d46420b749da45b6e3d003b8e0f9f987868b2f6b740919fb1f4b6f4381ec22\": container with ID starting with 84d46420b749da45b6e3d003b8e0f9f987868b2f6b740919fb1f4b6f4381ec22 not found: ID does not exist"
Jan 21 13:38:11 crc kubenswrapper[4881]: I0121 13:38:11.071229 4881 scope.go:117] "RemoveContainer" containerID="a4abcb1bb7fc4e7fbb86a1d2bb48c302a0121a5ee39ee48b7817d62657b97100"
Jan 21 13:38:11 crc kubenswrapper[4881]: E0121 13:38:11.071453 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4abcb1bb7fc4e7fbb86a1d2bb48c302a0121a5ee39ee48b7817d62657b97100\": container with ID starting with a4abcb1bb7fc4e7fbb86a1d2bb48c302a0121a5ee39ee48b7817d62657b97100 not found: ID does not exist" containerID="a4abcb1bb7fc4e7fbb86a1d2bb48c302a0121a5ee39ee48b7817d62657b97100"
Jan 21 13:38:11 crc kubenswrapper[4881]: I0121 13:38:11.071484 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4abcb1bb7fc4e7fbb86a1d2bb48c302a0121a5ee39ee48b7817d62657b97100"} err="failed to get container status \"a4abcb1bb7fc4e7fbb86a1d2bb48c302a0121a5ee39ee48b7817d62657b97100\": rpc error: code = NotFound desc = could not find container \"a4abcb1bb7fc4e7fbb86a1d2bb48c302a0121a5ee39ee48b7817d62657b97100\": container with ID starting with a4abcb1bb7fc4e7fbb86a1d2bb48c302a0121a5ee39ee48b7817d62657b97100 not found: ID does not exist"
Jan 21 13:38:11 crc kubenswrapper[4881]: I0121 13:38:11.327962 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed14e1b3-9440-4f92-a793-683eb01e4401" path="/var/lib/kubelet/pods/ed14e1b3-9440-4f92-a793-683eb01e4401/volumes"
Jan 21 13:38:17 crc kubenswrapper[4881]: I0121 13:38:17.607975 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lr68z"
Jan 21 13:38:17 crc kubenswrapper[4881]: I0121 13:38:17.664566 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lr68z"]
Jan 21 13:38:18 crc kubenswrapper[4881]: I0121 13:38:18.047693 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lr68z" podUID="03907694-a0e6-40d6-8142-9f20169ffe16" containerName="registry-server" containerID="cri-o://fa5a4de2f4e98ac0f222c55956633702fd594d9072f2c4646b5748469e7268b6" gracePeriod=2
Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.017683 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lr68z"
Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.129706 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03907694-a0e6-40d6-8142-9f20169ffe16-utilities\") pod \"03907694-a0e6-40d6-8142-9f20169ffe16\" (UID: \"03907694-a0e6-40d6-8142-9f20169ffe16\") "
Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.130044 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03907694-a0e6-40d6-8142-9f20169ffe16-catalog-content\") pod \"03907694-a0e6-40d6-8142-9f20169ffe16\" (UID: \"03907694-a0e6-40d6-8142-9f20169ffe16\") "
Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.130077 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcnx5\" (UniqueName: \"kubernetes.io/projected/03907694-a0e6-40d6-8142-9f20169ffe16-kube-api-access-pcnx5\") pod \"03907694-a0e6-40d6-8142-9f20169ffe16\" (UID: \"03907694-a0e6-40d6-8142-9f20169ffe16\") "
Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.132693 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03907694-a0e6-40d6-8142-9f20169ffe16-utilities" (OuterVolumeSpecName: "utilities") pod "03907694-a0e6-40d6-8142-9f20169ffe16" (UID: "03907694-a0e6-40d6-8142-9f20169ffe16"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.137448 4881 generic.go:334] "Generic (PLEG): container finished" podID="03907694-a0e6-40d6-8142-9f20169ffe16" containerID="fa5a4de2f4e98ac0f222c55956633702fd594d9072f2c4646b5748469e7268b6" exitCode=0
Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.137505 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr68z" event={"ID":"03907694-a0e6-40d6-8142-9f20169ffe16","Type":"ContainerDied","Data":"fa5a4de2f4e98ac0f222c55956633702fd594d9072f2c4646b5748469e7268b6"}
Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.137541 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lr68z"
Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.137554 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr68z" event={"ID":"03907694-a0e6-40d6-8142-9f20169ffe16","Type":"ContainerDied","Data":"17ba08f13a57d780ffed935060230f2347bf7739e7e17de9f0c3d10f0e502757"}
Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.137607 4881 scope.go:117] "RemoveContainer" containerID="fa5a4de2f4e98ac0f222c55956633702fd594d9072f2c4646b5748469e7268b6"
Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.141706 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03907694-a0e6-40d6-8142-9f20169ffe16-kube-api-access-pcnx5" (OuterVolumeSpecName: "kube-api-access-pcnx5") pod "03907694-a0e6-40d6-8142-9f20169ffe16" (UID: "03907694-a0e6-40d6-8142-9f20169ffe16"). InnerVolumeSpecName "kube-api-access-pcnx5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.183893 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03907694-a0e6-40d6-8142-9f20169ffe16-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03907694-a0e6-40d6-8142-9f20169ffe16" (UID: "03907694-a0e6-40d6-8142-9f20169ffe16"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.218171 4881 scope.go:117] "RemoveContainer" containerID="b7713d44fbace2dbc23c6335fe5f1e40542531f096c7a2e71ced23cf196b9cb8"
Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.233304 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03907694-a0e6-40d6-8142-9f20169ffe16-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.233347 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03907694-a0e6-40d6-8142-9f20169ffe16-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.233364 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcnx5\" (UniqueName: \"kubernetes.io/projected/03907694-a0e6-40d6-8142-9f20169ffe16-kube-api-access-pcnx5\") on node \"crc\" DevicePath \"\""
Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.245198 4881 scope.go:117] "RemoveContainer" containerID="2eb3085ce73aca7857a7b8d8990101a886d19523c869ba8ce6f26a66f122249d"
Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.300359 4881 scope.go:117] "RemoveContainer" containerID="fa5a4de2f4e98ac0f222c55956633702fd594d9072f2c4646b5748469e7268b6"
Jan 21 13:38:19 crc kubenswrapper[4881]: E0121 13:38:19.301964 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa5a4de2f4e98ac0f222c55956633702fd594d9072f2c4646b5748469e7268b6\": container with ID starting with fa5a4de2f4e98ac0f222c55956633702fd594d9072f2c4646b5748469e7268b6 not found: ID does not exist" containerID="fa5a4de2f4e98ac0f222c55956633702fd594d9072f2c4646b5748469e7268b6"
Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.302018 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa5a4de2f4e98ac0f222c55956633702fd594d9072f2c4646b5748469e7268b6"} err="failed to get container status \"fa5a4de2f4e98ac0f222c55956633702fd594d9072f2c4646b5748469e7268b6\": rpc error: code = NotFound desc = could not find container \"fa5a4de2f4e98ac0f222c55956633702fd594d9072f2c4646b5748469e7268b6\": container with ID starting with fa5a4de2f4e98ac0f222c55956633702fd594d9072f2c4646b5748469e7268b6 not found: ID does not exist"
Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.302061 4881 scope.go:117] "RemoveContainer" containerID="b7713d44fbace2dbc23c6335fe5f1e40542531f096c7a2e71ced23cf196b9cb8"
Jan 21 13:38:19 crc kubenswrapper[4881]: E0121 13:38:19.304068 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7713d44fbace2dbc23c6335fe5f1e40542531f096c7a2e71ced23cf196b9cb8\": container with ID starting with b7713d44fbace2dbc23c6335fe5f1e40542531f096c7a2e71ced23cf196b9cb8 not found: ID does not exist" containerID="b7713d44fbace2dbc23c6335fe5f1e40542531f096c7a2e71ced23cf196b9cb8"
Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.304116 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7713d44fbace2dbc23c6335fe5f1e40542531f096c7a2e71ced23cf196b9cb8"} err="failed to get container status \"b7713d44fbace2dbc23c6335fe5f1e40542531f096c7a2e71ced23cf196b9cb8\": rpc error: code = NotFound desc = could not find container \"b7713d44fbace2dbc23c6335fe5f1e40542531f096c7a2e71ced23cf196b9cb8\": container with ID starting with b7713d44fbace2dbc23c6335fe5f1e40542531f096c7a2e71ced23cf196b9cb8 not found: ID does not exist"
Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.304150 4881 scope.go:117] "RemoveContainer" containerID="2eb3085ce73aca7857a7b8d8990101a886d19523c869ba8ce6f26a66f122249d"
Jan 21 13:38:19 crc kubenswrapper[4881]: E0121 13:38:19.304451 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2eb3085ce73aca7857a7b8d8990101a886d19523c869ba8ce6f26a66f122249d\": container with ID starting with 2eb3085ce73aca7857a7b8d8990101a886d19523c869ba8ce6f26a66f122249d not found: ID does not exist" containerID="2eb3085ce73aca7857a7b8d8990101a886d19523c869ba8ce6f26a66f122249d"
Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.304481 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eb3085ce73aca7857a7b8d8990101a886d19523c869ba8ce6f26a66f122249d"} err="failed to get container status \"2eb3085ce73aca7857a7b8d8990101a886d19523c869ba8ce6f26a66f122249d\": rpc error: code = NotFound desc = could not find container \"2eb3085ce73aca7857a7b8d8990101a886d19523c869ba8ce6f26a66f122249d\": container with ID starting with 2eb3085ce73aca7857a7b8d8990101a886d19523c869ba8ce6f26a66f122249d not found: ID does not exist"
Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.311176 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85"
Jan 21 13:38:19 crc kubenswrapper[4881]: E0121 13:38:19.311737 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.467039 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lr68z"]
Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.476464 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lr68z"]
Jan 21 13:38:20 crc kubenswrapper[4881]: I0121 13:38:20.092498 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zw7gl"
Jan 21 13:38:21 crc kubenswrapper[4881]: I0121 13:38:21.337851 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03907694-a0e6-40d6-8142-9f20169ffe16" path="/var/lib/kubelet/pods/03907694-a0e6-40d6-8142-9f20169ffe16/volumes"
Jan 21 13:38:22 crc kubenswrapper[4881]: I0121 13:38:22.453566 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zw7gl"]
Jan 21 13:38:22 crc kubenswrapper[4881]: I0121 13:38:22.454210 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zw7gl" podUID="bf33fd22-6287-45a0-a95d-52c731fdda8d" containerName="registry-server" containerID="cri-o://005380c69dc02bb03b813c5b9b36612ee450bef0fa7fc34d08e62eb7b603f7e6" gracePeriod=2
Jan 21 13:38:23 crc kubenswrapper[4881]: I0121 13:38:23.184101 4881 generic.go:334] "Generic (PLEG): container finished" podID="bf33fd22-6287-45a0-a95d-52c731fdda8d" containerID="005380c69dc02bb03b813c5b9b36612ee450bef0fa7fc34d08e62eb7b603f7e6" exitCode=0
Jan 21 13:38:23 crc kubenswrapper[4881]: I0121 13:38:23.184150 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw7gl" event={"ID":"bf33fd22-6287-45a0-a95d-52c731fdda8d","Type":"ContainerDied","Data":"005380c69dc02bb03b813c5b9b36612ee450bef0fa7fc34d08e62eb7b603f7e6"}
Jan 21 13:38:24 crc kubenswrapper[4881]: I0121 13:38:24.262187 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw7gl" event={"ID":"bf33fd22-6287-45a0-a95d-52c731fdda8d","Type":"ContainerDied","Data":"7d972578e89879daf6c160e9a56e6d8e189f16edd9dc6acd027b280469b2b64a"}
Jan 21 13:38:24 crc kubenswrapper[4881]: I0121 13:38:24.262246 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d972578e89879daf6c160e9a56e6d8e189f16edd9dc6acd027b280469b2b64a"
Jan 21 13:38:24 crc kubenswrapper[4881]: I0121 13:38:24.332868 4881 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-zw7gl" Jan 21 13:38:24 crc kubenswrapper[4881]: I0121 13:38:24.457276 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf33fd22-6287-45a0-a95d-52c731fdda8d-catalog-content\") pod \"bf33fd22-6287-45a0-a95d-52c731fdda8d\" (UID: \"bf33fd22-6287-45a0-a95d-52c731fdda8d\") " Jan 21 13:38:24 crc kubenswrapper[4881]: I0121 13:38:24.457581 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf33fd22-6287-45a0-a95d-52c731fdda8d-utilities\") pod \"bf33fd22-6287-45a0-a95d-52c731fdda8d\" (UID: \"bf33fd22-6287-45a0-a95d-52c731fdda8d\") " Jan 21 13:38:24 crc kubenswrapper[4881]: I0121 13:38:24.457639 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rrct\" (UniqueName: \"kubernetes.io/projected/bf33fd22-6287-45a0-a95d-52c731fdda8d-kube-api-access-8rrct\") pod \"bf33fd22-6287-45a0-a95d-52c731fdda8d\" (UID: \"bf33fd22-6287-45a0-a95d-52c731fdda8d\") " Jan 21 13:38:24 crc kubenswrapper[4881]: I0121 13:38:24.458747 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf33fd22-6287-45a0-a95d-52c731fdda8d-utilities" (OuterVolumeSpecName: "utilities") pod "bf33fd22-6287-45a0-a95d-52c731fdda8d" (UID: "bf33fd22-6287-45a0-a95d-52c731fdda8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:38:24 crc kubenswrapper[4881]: I0121 13:38:24.484674 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf33fd22-6287-45a0-a95d-52c731fdda8d-kube-api-access-8rrct" (OuterVolumeSpecName: "kube-api-access-8rrct") pod "bf33fd22-6287-45a0-a95d-52c731fdda8d" (UID: "bf33fd22-6287-45a0-a95d-52c731fdda8d"). InnerVolumeSpecName "kube-api-access-8rrct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:38:24 crc kubenswrapper[4881]: I0121 13:38:24.518810 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf33fd22-6287-45a0-a95d-52c731fdda8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf33fd22-6287-45a0-a95d-52c731fdda8d" (UID: "bf33fd22-6287-45a0-a95d-52c731fdda8d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:38:24 crc kubenswrapper[4881]: I0121 13:38:24.560740 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf33fd22-6287-45a0-a95d-52c731fdda8d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:38:24 crc kubenswrapper[4881]: I0121 13:38:24.560774 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rrct\" (UniqueName: \"kubernetes.io/projected/bf33fd22-6287-45a0-a95d-52c731fdda8d-kube-api-access-8rrct\") on node \"crc\" DevicePath \"\"" Jan 21 13:38:24 crc kubenswrapper[4881]: I0121 13:38:24.560801 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf33fd22-6287-45a0-a95d-52c731fdda8d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:38:25 crc kubenswrapper[4881]: I0121 13:38:25.275530 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zw7gl" Jan 21 13:38:25 crc kubenswrapper[4881]: I0121 13:38:25.355815 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zw7gl"] Jan 21 13:38:25 crc kubenswrapper[4881]: I0121 13:38:25.358899 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zw7gl"] Jan 21 13:38:27 crc kubenswrapper[4881]: I0121 13:38:27.324380 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf33fd22-6287-45a0-a95d-52c731fdda8d" path="/var/lib/kubelet/pods/bf33fd22-6287-45a0-a95d-52c731fdda8d/volumes" Jan 21 13:38:33 crc kubenswrapper[4881]: I0121 13:38:33.318170 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" Jan 21 13:38:33 crc kubenswrapper[4881]: E0121 13:38:33.320742 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:38:33 crc kubenswrapper[4881]: I0121 13:38:33.768824 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="cd1973a5-773b-438b-aab7-709fb828324d" containerName="galera" probeResult="failure" output="command timed out" Jan 21 13:38:33 crc kubenswrapper[4881]: I0121 13:38:33.768894 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="cd1973a5-773b-438b-aab7-709fb828324d" containerName="galera" probeResult="failure" output="command timed out" Jan 21 13:38:44 crc kubenswrapper[4881]: I0121 13:38:44.459514 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" Jan 21 13:38:44 crc kubenswrapper[4881]: E0121 13:38:44.460190 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:38:55 crc kubenswrapper[4881]: I0121 13:38:55.311819 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" Jan 21 13:38:55 crc kubenswrapper[4881]: E0121 13:38:55.312750 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:39:09 crc kubenswrapper[4881]: I0121 13:39:09.311280 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" Jan 21 13:39:09 crc kubenswrapper[4881]: E0121 13:39:09.313073 4881 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:39:20 crc kubenswrapper[4881]: I0121 13:39:20.311360 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" Jan 21 13:39:20 crc kubenswrapper[4881]: E0121 13:39:20.312823 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:39:35 crc kubenswrapper[4881]: I0121 13:39:35.311681 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" Jan 21 13:39:35 crc kubenswrapper[4881]: E0121 13:39:35.313700 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:39:49 crc kubenswrapper[4881]: I0121 13:39:49.311302 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" Jan 21 13:39:49 crc kubenswrapper[4881]: E0121 13:39:49.312073 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:40:00 crc kubenswrapper[4881]: I0121 13:40:00.312167 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" Jan 21 13:40:00 crc kubenswrapper[4881]: E0121 13:40:00.313460 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:40:11 crc kubenswrapper[4881]: I0121 13:40:11.317312 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" Jan 21 13:40:11 crc kubenswrapper[4881]: E0121 13:40:11.318102 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:40:23 crc kubenswrapper[4881]: I0121 13:40:23.318401 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" Jan 21 13:40:23 crc kubenswrapper[4881]: E0121 13:40:23.324251 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:40:38 crc kubenswrapper[4881]: I0121 13:40:38.312663 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" Jan 21 13:40:38 crc kubenswrapper[4881]: E0121 13:40:38.313522 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:40:51 crc kubenswrapper[4881]: I0121 13:40:51.311566 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" Jan 21 13:40:51 crc kubenswrapper[4881]: E0121 13:40:51.312819 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:41:06 crc kubenswrapper[4881]: I0121 13:41:06.312078 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" Jan 21 13:41:06 crc kubenswrapper[4881]: E0121 13:41:06.313276 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:41:17 crc kubenswrapper[4881]: I0121 13:41:17.311311 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" Jan 21 13:41:17 crc kubenswrapper[4881]: E0121 13:41:17.311987 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" 
podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:41:29 crc kubenswrapper[4881]: I0121 13:41:29.310740 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" Jan 21 13:41:29 crc kubenswrapper[4881]: E0121 13:41:29.311929 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:41:41 crc kubenswrapper[4881]: I0121 13:41:41.311019 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" Jan 21 13:41:42 crc kubenswrapper[4881]: I0121 13:41:42.363730 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"5d5a67903992fb662b7e04fe2469b9d92cb257eabe2ba374576c606306072e01"} Jan 21 13:43:59 crc kubenswrapper[4881]: I0121 13:43:59.852444 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:43:59 crc kubenswrapper[4881]: I0121 13:43:59.853304 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:44:29 crc kubenswrapper[4881]: I0121 13:44:29.851491 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:44:29 crc kubenswrapper[4881]: I0121 13:44:29.852126 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:44:37 crc kubenswrapper[4881]: I0121 13:44:37.134548 4881 scope.go:117] "RemoveContainer" containerID="005380c69dc02bb03b813c5b9b36612ee450bef0fa7fc34d08e62eb7b603f7e6" Jan 21 13:44:37 crc kubenswrapper[4881]: I0121 13:44:37.180058 4881 scope.go:117] "RemoveContainer" containerID="c8b5a836281ab5b467d91cb111b8bde5e2a3b2341cf2889f854337a51110a7f2" Jan 21 13:44:37 crc kubenswrapper[4881]: I0121 13:44:37.272547 4881 scope.go:117] "RemoveContainer" containerID="1464e8dac96b23af6bad563afba50c099ee6ffdb3c7eb1c93e0ab2b66618e523" Jan 21 13:44:59 crc kubenswrapper[4881]: I0121 13:44:59.851625 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:44:59 crc kubenswrapper[4881]: I0121 13:44:59.852279 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:44:59 crc kubenswrapper[4881]: I0121 13:44:59.852339 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 13:44:59 crc kubenswrapper[4881]: I0121 13:44:59.853383 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5d5a67903992fb662b7e04fe2469b9d92cb257eabe2ba374576c606306072e01"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 13:44:59 crc kubenswrapper[4881]: I0121 13:44:59.853476 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://5d5a67903992fb662b7e04fe2469b9d92cb257eabe2ba374576c606306072e01" gracePeriod=600 Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.064983 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="5d5a67903992fb662b7e04fe2469b9d92cb257eabe2ba374576c606306072e01" exitCode=0 Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.065052 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"5d5a67903992fb662b7e04fe2469b9d92cb257eabe2ba374576c606306072e01"} Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.065228 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.161079 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8"] Jan 21 13:45:00 crc kubenswrapper[4881]: E0121 13:45:00.161598 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03907694-a0e6-40d6-8142-9f20169ffe16" containerName="registry-server" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.161617 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="03907694-a0e6-40d6-8142-9f20169ffe16" containerName="registry-server" Jan 21 13:45:00 crc kubenswrapper[4881]: E0121 13:45:00.161631 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed14e1b3-9440-4f92-a793-683eb01e4401" containerName="registry-server" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.161637 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed14e1b3-9440-4f92-a793-683eb01e4401" containerName="registry-server" Jan 21 13:45:00 crc kubenswrapper[4881]: E0121 13:45:00.161655 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf33fd22-6287-45a0-a95d-52c731fdda8d" containerName="registry-server" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.161661 4881 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="bf33fd22-6287-45a0-a95d-52c731fdda8d" containerName="registry-server" Jan 21 13:45:00 crc kubenswrapper[4881]: E0121 13:45:00.161672 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf33fd22-6287-45a0-a95d-52c731fdda8d" containerName="extract-utilities" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.161733 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf33fd22-6287-45a0-a95d-52c731fdda8d" containerName="extract-utilities" Jan 21 13:45:00 crc kubenswrapper[4881]: E0121 13:45:00.161745 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03907694-a0e6-40d6-8142-9f20169ffe16" containerName="extract-content" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.161750 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="03907694-a0e6-40d6-8142-9f20169ffe16" containerName="extract-content" Jan 21 13:45:00 crc kubenswrapper[4881]: E0121 13:45:00.161765 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed14e1b3-9440-4f92-a793-683eb01e4401" containerName="extract-utilities" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.161770 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed14e1b3-9440-4f92-a793-683eb01e4401" containerName="extract-utilities" Jan 21 13:45:00 crc kubenswrapper[4881]: E0121 13:45:00.161780 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03907694-a0e6-40d6-8142-9f20169ffe16" containerName="extract-utilities" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.161799 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="03907694-a0e6-40d6-8142-9f20169ffe16" containerName="extract-utilities" Jan 21 13:45:00 crc kubenswrapper[4881]: E0121 13:45:00.161816 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed14e1b3-9440-4f92-a793-683eb01e4401" containerName="extract-content" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.161822 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed14e1b3-9440-4f92-a793-683eb01e4401" containerName="extract-content" Jan 21 13:45:00 crc kubenswrapper[4881]: E0121 13:45:00.161840 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf33fd22-6287-45a0-a95d-52c731fdda8d" containerName="extract-content" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.161846 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf33fd22-6287-45a0-a95d-52c731fdda8d" containerName="extract-content" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.162080 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="03907694-a0e6-40d6-8142-9f20169ffe16" containerName="registry-server" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.162103 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed14e1b3-9440-4f92-a793-683eb01e4401" containerName="registry-server" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.162112 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf33fd22-6287-45a0-a95d-52c731fdda8d" containerName="registry-server" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.162956 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.165462 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.166857 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.191642 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8"] Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.215034 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kkdt\" (UniqueName: \"kubernetes.io/projected/26bc618a-da67-42a8-a7bb-d387e43c3b07-kube-api-access-8kkdt\") pod \"collect-profiles-29483385-f28d8\" (UID: \"26bc618a-da67-42a8-a7bb-d387e43c3b07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.215107 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26bc618a-da67-42a8-a7bb-d387e43c3b07-secret-volume\") pod \"collect-profiles-29483385-f28d8\" (UID: \"26bc618a-da67-42a8-a7bb-d387e43c3b07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.215215 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26bc618a-da67-42a8-a7bb-d387e43c3b07-config-volume\") pod \"collect-profiles-29483385-f28d8\" (UID: \"26bc618a-da67-42a8-a7bb-d387e43c3b07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.317096 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kkdt\" (UniqueName: \"kubernetes.io/projected/26bc618a-da67-42a8-a7bb-d387e43c3b07-kube-api-access-8kkdt\") pod \"collect-profiles-29483385-f28d8\" (UID: \"26bc618a-da67-42a8-a7bb-d387e43c3b07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.317341 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26bc618a-da67-42a8-a7bb-d387e43c3b07-secret-volume\") pod \"collect-profiles-29483385-f28d8\" (UID: \"26bc618a-da67-42a8-a7bb-d387e43c3b07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.317387 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26bc618a-da67-42a8-a7bb-d387e43c3b07-config-volume\") pod \"collect-profiles-29483385-f28d8\" (UID: \"26bc618a-da67-42a8-a7bb-d387e43c3b07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.318386 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26bc618a-da67-42a8-a7bb-d387e43c3b07-config-volume\") pod 
\"collect-profiles-29483385-f28d8\" (UID: \"26bc618a-da67-42a8-a7bb-d387e43c3b07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.334531 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26bc618a-da67-42a8-a7bb-d387e43c3b07-secret-volume\") pod \"collect-profiles-29483385-f28d8\" (UID: \"26bc618a-da67-42a8-a7bb-d387e43c3b07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.340668 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kkdt\" (UniqueName: \"kubernetes.io/projected/26bc618a-da67-42a8-a7bb-d387e43c3b07-kube-api-access-8kkdt\") pod \"collect-profiles-29483385-f28d8\" (UID: \"26bc618a-da67-42a8-a7bb-d387e43c3b07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.484088 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" Jan 21 13:45:01 crc kubenswrapper[4881]: I0121 13:45:01.023162 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8"] Jan 21 13:45:01 crc kubenswrapper[4881]: I0121 13:45:01.077505 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" event={"ID":"26bc618a-da67-42a8-a7bb-d387e43c3b07","Type":"ContainerStarted","Data":"7d144563e9481a5fd3724ac8a32737ad5c62afd07039e96c32a51ff9a35213a8"} Jan 21 13:45:01 crc kubenswrapper[4881]: I0121 13:45:01.079746 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"113d1373287853d89aa9f3d38980901d710b940a1b9ccbd9225bbeb2e3770216"} Jan 21 13:45:02 crc kubenswrapper[4881]: I0121 13:45:02.092077 4881 generic.go:334] "Generic (PLEG): container finished" podID="26bc618a-da67-42a8-a7bb-d387e43c3b07" containerID="719e3859e6f66471c6e2f81f0e16f40576c22800a6d3e0c44b5d268011817fa6" exitCode=0 Jan 21 13:45:02 crc kubenswrapper[4881]: I0121 13:45:02.092212 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" event={"ID":"26bc618a-da67-42a8-a7bb-d387e43c3b07","Type":"ContainerDied","Data":"719e3859e6f66471c6e2f81f0e16f40576c22800a6d3e0c44b5d268011817fa6"} Jan 21 13:45:04 crc kubenswrapper[4881]: I0121 13:45:04.098717 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" Jan 21 13:45:04 crc kubenswrapper[4881]: I0121 13:45:04.115386 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" event={"ID":"26bc618a-da67-42a8-a7bb-d387e43c3b07","Type":"ContainerDied","Data":"7d144563e9481a5fd3724ac8a32737ad5c62afd07039e96c32a51ff9a35213a8"} Jan 21 13:45:04 crc kubenswrapper[4881]: I0121 13:45:04.115459 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d144563e9481a5fd3724ac8a32737ad5c62afd07039e96c32a51ff9a35213a8" Jan 21 13:45:04 crc kubenswrapper[4881]: I0121 13:45:04.115549 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" Jan 21 13:45:04 crc kubenswrapper[4881]: I0121 13:45:04.205294 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26bc618a-da67-42a8-a7bb-d387e43c3b07-secret-volume\") pod \"26bc618a-da67-42a8-a7bb-d387e43c3b07\" (UID: \"26bc618a-da67-42a8-a7bb-d387e43c3b07\") " Jan 21 13:45:04 crc kubenswrapper[4881]: I0121 13:45:04.205592 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kkdt\" (UniqueName: \"kubernetes.io/projected/26bc618a-da67-42a8-a7bb-d387e43c3b07-kube-api-access-8kkdt\") pod \"26bc618a-da67-42a8-a7bb-d387e43c3b07\" (UID: \"26bc618a-da67-42a8-a7bb-d387e43c3b07\") " Jan 21 13:45:04 crc kubenswrapper[4881]: I0121 13:45:04.205678 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26bc618a-da67-42a8-a7bb-d387e43c3b07-config-volume\") pod \"26bc618a-da67-42a8-a7bb-d387e43c3b07\" (UID: \"26bc618a-da67-42a8-a7bb-d387e43c3b07\") " Jan 21 13:45:04 crc kubenswrapper[4881]: I0121 13:45:04.206944 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26bc618a-da67-42a8-a7bb-d387e43c3b07-config-volume" (OuterVolumeSpecName: "config-volume") pod "26bc618a-da67-42a8-a7bb-d387e43c3b07" (UID: "26bc618a-da67-42a8-a7bb-d387e43c3b07"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:45:04 crc kubenswrapper[4881]: I0121 13:45:04.213779 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26bc618a-da67-42a8-a7bb-d387e43c3b07-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "26bc618a-da67-42a8-a7bb-d387e43c3b07" (UID: "26bc618a-da67-42a8-a7bb-d387e43c3b07"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:45:04 crc kubenswrapper[4881]: I0121 13:45:04.213938 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26bc618a-da67-42a8-a7bb-d387e43c3b07-kube-api-access-8kkdt" (OuterVolumeSpecName: "kube-api-access-8kkdt") pod "26bc618a-da67-42a8-a7bb-d387e43c3b07" (UID: "26bc618a-da67-42a8-a7bb-d387e43c3b07"). InnerVolumeSpecName "kube-api-access-8kkdt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:45:04 crc kubenswrapper[4881]: I0121 13:45:04.308910 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kkdt\" (UniqueName: \"kubernetes.io/projected/26bc618a-da67-42a8-a7bb-d387e43c3b07-kube-api-access-8kkdt\") on node \"crc\" DevicePath \"\"" Jan 21 13:45:04 crc kubenswrapper[4881]: I0121 13:45:04.308945 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26bc618a-da67-42a8-a7bb-d387e43c3b07-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:45:04 crc kubenswrapper[4881]: I0121 13:45:04.308959 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26bc618a-da67-42a8-a7bb-d387e43c3b07-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:45:05 crc kubenswrapper[4881]: I0121 13:45:05.203407 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4"] Jan 21 13:45:05 crc kubenswrapper[4881]: I0121 13:45:05.216799 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4"] Jan 21 13:45:05 crc kubenswrapper[4881]: I0121 13:45:05.323384 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3d03c94-fe93-4321-a2a8-44fc4e42cecf" path="/var/lib/kubelet/pods/a3d03c94-fe93-4321-a2a8-44fc4e42cecf/volumes"